The theory of embodied music cognition states that the perception and cognition of music is firmly, although not exclusively, linked to action patterns associated with that music. In this regard, the focus lies mostly on how music promotes certain action tendencies (i.e., dance, entrainment, etc.). Only recently have studies started to devote attention to the reciprocal effects that people’s body movements may exert on how people perceive certain aspects of music and sound (e.g., pitch, meter, musical preference, etc.). The present study positions itself in this line of research. The central research question is whether expressive body movements, which are systematically paired with music, can modulate children’s perception of musical expressiveness. We present a behavioral experiment in which different groups of children (7–8 years, N = 46) either repetitively performed a happy or a sad choreography in response to expressively ambiguous music or merely listened to that music. The results of our study indeed show that children’s perception of musical expressiveness is modulated in accordance with the expressive character of the dance choreography performed to the music. This finding supports theories that claim a strong connection between action and perception, although further research is needed to uncover the details of this connection.
This paper examines the idea that attraction to music is generated at a cognitive level through the formation and activation of networks of interlinked “nodes.” Although the networks involved are vast, the basic mechanism for activating the links is relatively simple. Two comprehensive cognitive-behavioral models of musical engagement are examined with the aim of identifying the underlying cognitive mechanisms and processes involved in musical experience. A “dynamical minimalism” approach (after Nowak, 2004) is applied to re-interpret musical engagement (listening, performing, composing, or imagining any of these) and to revise the latest version of the reciprocal-feedback model (RFM) of music processing. Specifically, a single cognitive mechanism of “spreading activation” through previously associated networks is proposed as a pleasurable outcome of musical engagement. This mechanism underlies the dynamic interaction of the various components of the RFM, and can thereby explain the generation of positive affects in the listener’s musical experience. This includes determinants of that experience stemming from the characteristics of the individual engaging in the musical activity (whether listener, composer, improviser, or performer), the situation and contexts (e.g., social factors), and the music (e.g., genre, structural features). The theory calls for new directions for future research, two being (1) further investigation of the components of the RFM to better understand musical experience and (2) more rigorous scrutiny of common findings about the salience of familiarity in musical experience and preference.
musical experience; cognitive model; reciprocal-feedback model; spreading activation; mind-body; neural networks; preference; familiarity
Moving to music is an essential human pleasure particularly related to musical groove. Structurally, music associated with groove is often characterised by rhythmic complexity in the form of syncopation, frequently observed in musical styles such as funk, hip-hop and electronic dance music. Structural complexity has been related to positive affect in music more broadly, but the function of syncopation in eliciting pleasure and body-movement in groove is unknown. Here we report results from a web-based survey which investigated the relationship between syncopation and ratings of wanting to move and experienced pleasure. Participants heard funk drum-breaks with varying degrees of syncopation and audio entropy, and rated the extent to which the drum-breaks made them want to move and how much pleasure they experienced. While entropy was found to be a poor predictor of wanting to move and pleasure, the results showed that medium degrees of syncopation elicited the most desire to move and the most pleasure, particularly for participants who enjoy dancing to music. Hence, there is an inverted U-shaped relationship between syncopation, body-movement and pleasure, and syncopation seems to be an important structural factor in embodied and affective responses to groove.
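The inverted-U relationship described above can be tested with a simple quadratic regression of ratings on syncopation degree. The sketch below is a minimal illustration of that analysis; the data are invented for demonstration and are not the study's.

```python
import numpy as np

def inverted_u_fit(syncopation, ratings):
    """Fit ratings ~ b0 + b1*s + b2*s**2. An inverted U is indicated
    by a negative quadratic coefficient (b2 < 0), with the peak of
    the curve at s = -b1 / (2 * b2)."""
    b2, b1, b0 = np.polyfit(syncopation, ratings, deg=2)
    peak = -b1 / (2 * b2)
    return b2, peak

# Illustrative data: "wanting to move" ratings that peak at a
# medium degree of syncopation.
sync = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
move = np.array([2.0, 3.5, 4.6, 5.0, 4.5, 3.4, 2.1])
curvature, peak = inverted_u_fit(sync, move)
```

A negative `curvature` with a `peak` inside the stimulus range is the signature of the medium-syncopation optimum the study reports.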
Dancing and singing to music involve auditory-motor coordination and have been essential to our human culture since ancient times. Although scholars have been trying to understand the evolutionary and developmental origin of music, early human developmental manifestations of auditory-motor interactions in music have not been fully investigated. Here we report limb movements and vocalizations in three- to four-month-old infants both while they listened to music and while they were in silence. In the group analysis, we found no significant increase in the amount of movement or in the relative power spectrum density around the musical tempo in the music condition compared to the silent condition. Intriguingly, however, there were two infants who demonstrated striking increases in rhythmic movements, via kicking or arm-waving, around the musical tempo while listening to music. Monte-Carlo statistics with phase-randomized surrogate data revealed that the limb movements of these individuals were significantly synchronized to the musical beat. Moreover, we found a clear increase in the formant variability of vocalizations in the group during music perception. These results suggest that infants at this age are already primed with their bodies to interact with music via limb movements and vocalizations.
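The Monte-Carlo logic behind such a surrogate test can be sketched as follows. The study used phase-randomized surrogates of the movement signals; the simplified stand-in below instead works on movement onset times and draws uniform surrogate onsets, which is not the paper's exact procedure but illustrates the same idea of comparing observed beat locking against a null distribution.

```python
import numpy as np

def mean_resultant_length(onsets, period):
    """Circular concentration of onset times relative to a beat period:
    R near 1 means tightly phase-locked, near 0 means unrelated."""
    phases = 2 * np.pi * (np.asarray(onsets) % period) / period
    return float(np.abs(np.mean(np.exp(1j * phases))))

def beat_locking_pvalue(onsets, period, n_surr=500, seed=0):
    """Monte-Carlo p-value for beat locking: compare the observed R
    against surrogate onset trains drawn uniformly over the same time
    span (a simplified null, not the study's phase-randomized one)."""
    rng = np.random.default_rng(seed)
    onsets = np.asarray(onsets, dtype=float)
    span = onsets.max() - onsets.min()
    observed = mean_resultant_length(onsets, period)
    null = [mean_resultant_length(rng.uniform(0.0, span, onsets.size), period)
            for _ in range(n_surr)]
    return (1 + sum(r >= observed for r in null)) / (1 + n_surr)

# Toy example: kicks exactly on a 120 BPM beat (0.5 s period)
# should yield a very small p-value.
kicks = np.arange(20) * 0.5
p = beat_locking_pvalue(kicks, period=0.5, seed=1)
```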
Entrainment has been studied in a variety of contexts including music perception, dance, verbal communication and motor coordination more generally. Here we seek to provide a unifying framework that incorporates the key aspects of entrainment as it has been studied in these varying domains. We propose that there are a number of types of entrainment that build upon pre-existing adaptations that allow organisms to perceive stimuli as rhythmic, to produce periodic stimuli, and to integrate the two using sensory feedback. We suggest that social entrainment is a special case of spatiotemporal coordination where the rhythmic signal originates from another individual. We use this framework to understand the function and evolutionary basis for coordinated rhythmic movement and to explore questions about the nature of entrainment in music and dance. The framework of entrainment presented here has a number of implications for the vocal learning hypothesis and other proposals for the evolution of coordinated rhythmic behavior across an array of species.
In all ages and countries, music and dance have constituted a central part in human culture and communication. Recently, vocal-learning animals such as parrots and elephants have been found to share rhythmic ability with humans. Thus, we investigated the rhythmic synchronization of budgerigars, a vocal-mimicking parrot species, under controlled conditions and a systematically designed experimental paradigm as a first step in understanding the evolution of musical entrainment. We trained eight budgerigars to perform isochronous tapping tasks in which they pecked a key to the rhythm of audio–visual metronome-like stimuli. The budgerigars showed evidence of entrainment to external stimuli over a wide range of tempos. They seemed to be inherently inclined to tap at fast tempos, which have a similar time scale to the rhythm of budgerigars' natural vocalizations. We suggest that vocal learning might have contributed to their performance, which resembled that of humans.
Is there something special about the way music communicates feelings? Theorists since Meyer (1956) have attempted to explain how music could stimulate varied and subtle affective experiences by violating learned expectancies, or by mimicking other forms of social interaction. Our proposal is that music speaks to the brain in its own language; it need not imitate any other form of communication. We review recent theoretical and empirical literature, which suggests that all conscious processes consist of dynamic neural events, produced by spatially dispersed processes in the physical brain. Intentional thought and affective experience arise as dynamical aspects of neural events taking place in multiple brain areas simultaneously. At any given moment, this content comprises a unified “scene” that is integrated into a dynamic core through synchrony of neuronal oscillations. We propose that (1) neurodynamic synchrony with musical stimuli gives rise to musical qualia including tonal and temporal expectancies, and that (2) music-synchronous responses couple into core neurodynamics, enabling music to directly modulate core affect. Expressive music performance, for example, may recruit rhythm-synchronous neural responses to support affective communication. We suggest that the dynamic relationship between musical expression and the experience of affect presents a unique opportunity for the study of emotional experience. This may help elucidate the neural mechanisms underlying arousal and valence, and offer a new approach to exploring the complex dynamics of the how and why of emotional experience.
neurodynamics; consciousness; affect; emotion; musical expectancy; oscillation; synchrony
This paper outlines a cognitive and comparative perspective on human rhythmic cognition that emphasizes a key distinction between pulse perception and meter perception. Pulse perception involves the extraction of a regular pulse or “tactus” from a stream of events. Meter perception involves grouping of events into hierarchical trees with differing levels of “strength”, or perceptual prominence. I argue that metrically-structured rhythms are required to either perform or move appropriately to music (e.g., to dance). Rhythms, from this metrical perspective, constitute “trees in time.” Rhythmic syntax represents a neglected form of musical syntax, and warrants more thorough neuroscientific investigation. The recent literature on animal entrainment clearly demonstrates the capacity to extract the pulse from rhythmic music, and to entrain periodic movements to this pulse, in several parrot species and a California sea lion, and a more limited ability to do so in one chimpanzee. However, the ability of these or other species to infer hierarchical rhythmic trees remains, for the most part, unexplored (with some apparent negative results from macaques). The results from this animal comparative research, combined with new methods to explore rhythmic cognition neurally, provide exciting new routes for understanding not just rhythmic cognition, but hierarchical cognition more generally, from a biological and neural perspective.
rhythm; meter; music cognition; cognitive biology; comparative cognition; hierarchy
Music is a cross-cultural universal, a ubiquitous activity found in every known human culture. Individuals demonstrate manifestly different preferences in music, and yet relatively little is known about the underlying structure of those preferences. Here, we introduce a model of musical preferences based on listeners’ affective reactions to excerpts of music from a wide variety of musical genres. The findings from three independent studies converged to suggest that there exists a latent five-factor structure underlying music preferences that is genre-free, and reflects primarily emotional/affective responses to music. We have interpreted and labeled these factors as: 1) a Mellow factor comprising smooth and relaxing styles; 2) an Urban factor defined largely by rhythmic and percussive music, such as is found in rap, funk, and acid jazz; 3) a Sophisticated factor that includes classical, operatic, world, and jazz; 4) an Intense factor defined by loud, forceful, and energetic music; and 5) a Campestral factor comprising a variety of different styles of direct and rootsy music such as is often found in country and singer-songwriter genres. The findings from a fourth study suggest that preferences for the MUSIC factors are affected by both the social and auditory characteristics of the music.
MUSIC; preferences; individual differences; factor analysis
Rhythms are an essential characteristic of our lives, and auditory-motor coupling affects a variety of behaviors. Previous research has shown that the neural regions associated with motor system processing are coupled to perceptual rhythmic and melodic processing such that the perception of rhythmic stimuli can entrain motor system responses. However, the degree to which individual preference modulates the motor system is unknown. Recent work has shown that passively listening to metrically strong rhythms increases corticospinal excitability, as indicated by transcranial magnetic stimulation (TMS). Furthermore, this effect is modulated by high-groove music, or music that inspires movement, while neuroimaging evidence suggests that premotor activity increases with tempos occurring within a preferred tempo (PT) category. PT refers to the rate of a hypothetical endogenous oscillator that may be indicated by spontaneous motor tempo (SMT) and preferred perceptual tempo (PPT) measurements. The present study investigated whether listening to a rhythm at an individual’s PT preferentially modulates motor system excitability. SMT was obtained in human participants through a tapping task in which subjects were asked to tap a response key at their most comfortable rate. Subjects listened to a 10-beat tone sequence at 11 log-spaced tempos and rated their preference for each (PPT). We found that SMT and PPT measurements were correlated, indicating that preferred and produced tempos occurred at a similar rate. Crucially, single-pulse TMS delivered to left M1 during PPT judgments revealed that corticospinal excitability, measured by motor-evoked potentials (MEPs), was modulated by tempos closer to individual PT. However, the specific nature of this modulation differed across individuals, with some exhibiting an increase in excitability around PT and others exhibiting a decrease.
These findings suggest that auditory-motor coupling induced by rhythms is preferentially modulated by rhythms occurring at a preferred rate, and that individual differences can alter the nature of this coupling.
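The two tempo measures described above can be sketched computationally. In the sketch below, SMT is derived from self-paced tap times and PPT is the most-preferred of a set of log-spaced stimulus tempos; the tempo range endpoints and the toy participant data are assumptions for illustration, not values from the study.

```python
import numpy as np

def spontaneous_motor_tempo(tap_times):
    """SMT in BPM: 60 divided by the median inter-tap interval (s)."""
    itis = np.diff(np.sort(np.asarray(tap_times, dtype=float)))
    return 60.0 / float(np.median(itis))

# 11 log-spaced stimulus tempos in BPM (endpoints are assumptions).
tempos = np.geomspace(30.0, 240.0, num=11)

def preferred_perceptual_tempo(ratings):
    """PPT: the stimulus tempo that received the highest rating."""
    return float(tempos[int(np.argmax(ratings))])

# Toy participant: taps every 0.6 s (SMT = 100 BPM) and prefers
# the stimulus tempo nearest their own tapping rate.
taps = np.arange(15) * 0.6
smt = spontaneous_motor_tempo(taps)
ratings = -np.abs(tempos - smt)   # preference peaks near the SMT
ppt = preferred_perceptual_tempo(ratings)
```

In this toy case SMT and PPT land close together, mirroring the correlation between produced and preferred tempos that the study reports.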
transcranial magnetic stimulation; rhythm perception; tempo and timing; corticospinal excitability; individual differences
Individuals with autism show impairments in emotional tuning, social interactions and communication. These are functions that have been attributed to the putative human mirror neuron system (MNS), which contains neurons that respond to the actions of self and others. It has been proposed that a dysfunction of that system underlies some of the characteristics of autism. Here, we review behavioral and imaging studies that implicate the MNS (or a brain network with similar functions) in sensory-motor integration and speech representation, and review data supporting the hypothesis that MNS activity could be abnormal in autism. In addition, we propose that an intervention designed to engage brain regions that overlap with the MNS may have significant clinical potential. We argue that this engagement could be achieved through forms of music making. Music making with others (e.g., playing instruments or singing) is a multi-modal activity that has been shown to engage brain regions that largely overlap with the human MNS. Furthermore, many children with autism thoroughly enjoy participating in musical activities. Such activities may enhance their ability to focus and interact with others, thereby fostering the development of communication and social skills. Thus, interventions incorporating methods of music making may offer a promising approach for facilitating expressive language in otherwise nonverbal children with autism.
Autism; Music; Language; Brain; Mirror neuron system; Auditory-motor mapping training
I defend a model of the musically extended mind. I consider how acts of “musicking” grant access to novel emotional experiences otherwise inaccessible. First, I discuss the idea of “musical affordances” and specify both what musical affordances are and how they invite different forms of entrainment. Next, I argue that musical affordances – via soliciting different forms of entrainment – enhance the functionality of various endogenous, emotion-granting regulative processes, drawing novel experiences out of us with an expanded complexity and phenomenal character. I argue that music therefore ought to be thought of as part of the vehicle needed to realize these emotional experiences. I appeal to different sources of empirical work to develop this idea.
music; affordances; extended cognition; emotions; emotion regulation; phenomenology
Assessment of awareness for those with disorders of consciousness is a challenging undertaking, due to the complex presentation of the population. Debate surrounds whether behavioral assessments provide greater diagnostic accuracy than neuroimaging methods, and despite developments in both, misdiagnosis rates remain high. Music therapy may be effective in the assessment and rehabilitation of this population due to the effects of musical stimuli on arousal, attention, and emotion, irrespective of verbal or motor deficits. However, an evidence base is lacking as to which procedures are most effective. To address this, a neurophysiological and behavioral study was undertaken comparing electroencephalogram (EEG), heart rate variability, respiration, and behavioral responses of 20 healthy subjects with 21 individuals in vegetative or minimally conscious states (VS or MCS). Subjects were presented with live preferred music and improvised music entrained to respiration (procedures typically used in music therapy), recordings of disliked music, white noise, and silence. ANOVA tests indicated a range of significant responses (p ≤ 0.05) across healthy subjects corresponding to arousal and attention in response to preferred music including concurrent increases in respiration rate with globally enhanced EEG power spectra responses (p = 0.05–0.0001) across frequency bandwidths. Whilst physiological responses were heterogeneous across patient cohorts, significant post hoc EEG amplitude increases for stimuli associated with preferred music were found for frontal midline theta in six VS and four MCS subjects, and frontal alpha in three VS and four MCS subjects (p = 0.05–0.0001). Furthermore, behavioral data showed a significantly increased blink rate for preferred music (p = 0.029) within the VS cohort. Two VS cases are presented with concurrent changes (p ≤ 0.05) across measures indicative of discriminatory responses to both music therapy procedures.
A third MCS case study is presented highlighting how more sensitive selective attention may distinguish MCS from VS. The findings suggest that further investigation is warranted to explore the use of music therapy for prognostic indicators, and its potential to support neuroplasticity in rehabilitation programs.
EEG; music therapy; disorders of consciousness; assessment; diagnosis; brain injury; vegetative state; minimally conscious state
Young children regularly engage in musical activities, but the effects of early music education on children's cognitive development are unknown. While some studies have found associations between musical training in childhood and later nonmusical cognitive outcomes, few randomized controlled trials (RCTs) have been employed to assess causal effects of music lessons on child cognition and no clear pattern of results has emerged. We conducted two RCTs with preschool children investigating the cognitive effects of a brief series of music classes, as compared to a similar but non-musical form of arts instruction (visual arts classes, Experiment 1) or to a no-treatment control (Experiment 2). Consistent with typical preschool arts enrichment programs, parents attended classes with their children, participating in a variety of developmentally appropriate arts activities. After six weeks of class, we assessed children's skills in four distinct cognitive areas in which older arts-trained students have been reported to excel: spatial-navigational reasoning, visual form analysis, numerical discrimination, and receptive vocabulary. We initially found that children from the music class showed greater spatial-navigational ability than did children from the visual arts class, while children from the visual arts class showed greater visual form analysis ability than children from the music class (Experiment 1). However, a partial replication attempt comparing music training to a no-treatment control failed to confirm these findings (Experiment 2), and the combined results of the two experiments were negative: overall, children provided with music classes performed no better than those with visual arts or no classes on any assessment. Our findings underscore the need for replication in RCTs, and suggest caution in interpreting the positive findings from past studies of cognitive effects of music instruction.
Musical emotions, such as happiness and sadness, have been investigated using instrumental music devoid of linguistic content. However, pop and rock, the most common musical genres, utilize lyrics for conveying emotions. Using participants’ self-selected musical excerpts, we studied their behavior and brain responses to elucidate how lyrics interact with musical emotion processing, as reflected by emotion recognition and activation of limbic areas involved in affective experience. We extracted samples from subjects’ selections of sad and happy pieces and sorted them according to the presence of lyrics. Acoustic feature analysis showed that music with lyrics differed from music without lyrics in spectral centroid, a feature related to perceptual brightness, whereas sad music with lyrics did not diverge from happy music without lyrics, indicating the role of other factors in emotion classification. Behavioral ratings revealed that happy music without lyrics induced stronger positive emotions than happy music with lyrics. We also acquired functional magnetic resonance imaging data while subjects performed affective tasks regarding the music. First, using ecological and acoustically variable stimuli, we broadened previous findings about the brain processing of musical emotions and of songs versus instrumental music. Additionally, contrasts between sad music with versus without lyrics recruited the parahippocampal gyrus, the amygdala, the claustrum, the putamen, the precentral gyrus, the medial and inferior frontal gyri (including Broca’s area), and the auditory cortex, while the reverse contrast produced no activations. Happy music without lyrics activated structures of the limbic system and the right pars opercularis of the inferior frontal gyrus, whereas auditory regions alone responded to happy music with lyrics. These findings point to the role of acoustic cues for the experience of happiness in music and to the importance of lyrics for sad musical emotions.
music; emotion; fMRI; limbic system; language; acoustic feature
The purpose of this paper is to show some aspects of music therapy application in cancer care and to present the integration of a music therapy program into continuous supportive cancer care for inpatients. A cancer diagnosis is one of the most feared and serious life events that causes stress in individuals and families. Cancer disrupts social, physical and emotional well-being and results in a range of emotions, including anger, fear, sadness, guilt, embarrassment and shame. Music therapy is a part of a complementary medicine program in supportive cancer care which accompanies medical treatment. There are many benefits of music therapy for cancer patients: interactive music therapy techniques (instrumental improvisation, singing) as well as receptive music therapy techniques (listening to recorded or live music, music and imagery) can be used to improve mood, decrease stress, pain and anxiety levels, and enhance relaxation. Music therapy is an effective form of supporting cancer care for patients during the treatment process. It may also serve as a basis for planning effective rehabilitation programs that promote wellness, improve physical and emotional well-being, and enhance quality of life.
Cancer care; Music therapy; Oncology
Previous research has shown that the matching of rhythmic behaviour between individuals (synchrony) increases cooperation. Such synchrony is most noticeable in music, dance and collective rituals. As well as the matching of behaviour, such collective performances typically involve shared intentionality: performers actively collaborate to produce joint actions. Over three experiments we examined the importance of shared intentionality in promoting cooperation from group synchrony. Experiment 1 compared a condition in which group synchrony was produced through shared intentionality to conditions in which synchrony or asynchrony were created as a by-product of hearing the same or different rhythmic beats. We found that synchrony combined with shared intentionality produced the greatest level of cooperation. To examine the importance of synchrony when shared intentionality is present, Experiment 2 compared a condition in which participants deliberately worked together to produce synchrony with a condition in which participants deliberately worked together to produce asynchrony. We found that synchrony combined with shared intentionality produced the greatest level of cooperation. Experiment 3 manipulated both the presence of synchrony and shared intentionality and found significantly greater cooperation with synchrony and shared intentionality combined. Path analysis supported a reinforcement of cooperation model according to which perceiving synchrony when there is a shared goal to produce synchrony provides immediate feedback for successful cooperation, so reinforcing the group’s cooperative tendencies. The reinforcement of cooperation model helps to explain the evolutionary conservation of traditional music and dance performances, and furthermore suggests that the collectivist values of such cultures may be an essential part of the mechanisms by which synchrony galvanises cooperative behaviours.
As the main interhemispheric fiber tract, the corpus callosum (CC) is of particular importance for musicians who simultaneously engage parts of both hemispheres to process and play music. Professional musicians who began music training before the age of 7 years have larger anterior CC areas than do nonmusicians, which suggests that plasticity due to music training may occur in the CC during early childhood. However, no study has yet demonstrated that the increased CC area found in musicians is due to music training rather than to preexisting differences. We tested the hypothesis that approximately 29 months of instrumental music training would cause a significant increase in the size of particular subareas of the CC known to have fibers that connect motor-related areas of both hemispheres. On the basis of total weekly practice time, a sample of 31 children aged 5–7 was divided into three groups: high-practicing, low-practicing, and controls. No CC size differences were seen at baseline, but differences emerged after an average of 29 months of observation in the high-practicing group in the anterior midbody of the CC (which connects premotor and supplementary motor areas of the two hemispheres). Total weekly music exposure predicted degree of change in this subregion of the CC as well as improvement on a motor-sequencing task. Our results show that it is intense musical experience/practice, not preexisting differences, that is responsible for the larger anterior CC area found in professional adult musicians.
music; brain; corpus callosum; children; plasticity
The influence of formal musical training on auditory cognition has been well established. For the majority of children, however, musical experience does not primarily consist of adult-guided training on a musical instrument. Instead, young children mostly engage in everyday musical activities such as singing and musical play. Here, we review recent electrophysiological and behavioral studies carried out in our laboratory and elsewhere which have begun to map how developing auditory skills are shaped by such informal musical activities both at home and in playschool-type settings. Although more research is still needed, the evidence emerging from these studies suggests that, in addition to formal musical training, informal musical activities can also influence the maturation of auditory discrimination and attention in preschool-aged children.
music; brain development; event-related potential; training; auditory perception; informal musical activities
Every human culture has some form of music with a beat: a perceived periodic pulse that structures the perception of musical rhythm and which serves as a framework for synchronized movement to music. What are the neural mechanisms of musical beat perception, and how did they evolve? One view, which dates back to Darwin and implicitly informs some current models of beat perception, is that the relevant neural mechanisms are relatively general and are widespread among animal species. On the basis of recent neural and cross-species data on musical beat processing, this paper argues for a different view. Here we argue that beat perception is a complex brain function involving temporally-precise communication between auditory regions and motor planning regions of the cortex (even in the absence of overt movement). More specifically, we propose that simulation of periodic movement in motor planning regions provides a neural signal that helps the auditory system predict the timing of upcoming beats. This “action simulation for auditory prediction” (ASAP) hypothesis leads to testable predictions. We further suggest that ASAP relies on dorsal auditory pathway connections between auditory regions and motor planning regions via the parietal cortex, and suggest that these connections may be stronger in humans than in non-human primates due to the evolution of vocal learning in our lineage. This suggestion motivates cross-species research to determine which species are capable of human-like beat perception, i.e., beat perception that involves accurate temporal prediction of beat times across a fairly broad range of tempi.
evolution; rhythm perception; brain; music cognition; comparative psychology
Absolute pitch (AP) is a form of sound recognition in which musical note names are associated with discrete musical pitch categories. The accuracy of pitch matching by non-AP musicians for chords has recently been shown to depend on stimulus familiarity, pointing to a role of spectral recognition mechanisms in the early stages of pitch processing. Here we show that pitch matching accuracy by AP musicians was also dependent on their familiarity with the chord stimulus. This suggests that the pitch matching abilities of both AP and non-AP musicians for concurrently presented pitches are dependent on initial recognition of the chord. The dual mechanism model of pitch perception previously proposed by the authors suggests that spectral processing associated with sound recognition primes waveform processing to extract stimulus periodicity and refine pitch perception. The findings presented in this paper are consistent with the dual mechanism model of pitch, and in the case of AP musicians, the formation of nominal pitch categories based on both spectral and periodicity information.
pitch; absolute pitch; concurrent pitch; neurocognitive model; recognition
Environmental prevention strategies in club settings where music and dance events are featured could provide an important new arena for the prevention of drug use and other risky behaviors (e.g., sexual risk taking, intoxication and drug use, aggression, and driving under the influence). Electronic music dance events (EMDEs) occur in clubs that attract young, emerging adults (18–25 years of age) and attract individuals who engage in various types of drug use. Borrowing from the environmental prevention studies that focus on reducing alcohol use and related problems, a model for drug prevention in the club setting is proposed. Initially, an overview of the relationships between EMDEs and drug use and other risky behaviors is presented. Next, rationales for environmental strategies are provided. Finally, an environmental approach to prevention of drug use and risky behaviors in clubs is described. This comprehensive set of environmental strategies is designed to be mutually supportive and interactive. Environmental strategies are believed to provide potential for developing an efficacious prevention strategy. The environmental prevention approach presented here is composed of three intervention domains: (1) Mobilization, (2) Strategies for the Exterior Environment, and (3) Strategies for the Interior Environment.
drugs; prevention; clubs
Expert ensemble musicians produce exquisitely coordinated sounds, but rehearsal is typically required to do so. Ensemble coordination may thus be influenced by the degree to which individuals are familiar with each other's parts. Such familiarity may affect the ability to predict and synchronize with co-performers' actions. Internal models related to action simulation and anticipatory musical imagery may be affected by knowledge of (1) the musical structure of a co-performer's part (e.g., in terms of its rhythm and phrase structure) and/or (2) the co-performer's idiosyncratic playing style (e.g., expressive micro-timing variations). The current study investigated the effects of familiarity on interpersonal coordination in piano duos. Skilled pianists were required to play several duets with different partners. One condition included duets for which co-performers had previously practiced both parts, while another condition included duets for which each performer had practiced only their own part. Each piece was recorded six times without joint rehearsal or visual contact to examine the effects of increasing familiarity. Interpersonal coordination was quantified by measuring asynchronies between pianists' keystroke timing and the correlation of their body (head and torso) movements, which were recorded with a motion capture system. The results suggest that familiarity with a co-performer's part, in the absence of familiarity with their playing style, engenders predictions about micro-timing variations that are based instead upon one's own playing style, leading to a mismatch between predictions and actual events at short timescales. Predictions at longer timescales—that is, those related to musical measures and phrases, and reflected in head movements and body sway—are, however, facilitated by familiarity with the structure of a co-performer's part. These findings point to a dissociation between interpersonal coordination at the level of keystrokes and body movements.
interpersonal coordination; body movement; music; ensembles; sensorimotor synchronization
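The two coordination measures described in the abstract above (asynchronies between keystroke onsets and the correlation of body movements) can be sketched in a few lines. The function names, the choice of mean absolute asynchrony, and the use of a plain Pearson correlation are our illustrative assumptions, not the study's actual analysis pipeline.

```python
import numpy as np

def mean_asynchrony(onsets_a, onsets_b):
    """Mean absolute asynchrony (in seconds) between two pianists'
    matched keystroke onset times (hypothetical helper)."""
    a = np.asarray(onsets_a, dtype=float)
    b = np.asarray(onsets_b, dtype=float)
    return float(np.mean(np.abs(a - b)))

def movement_correlation(sway_a, sway_b):
    """Pearson correlation between two body-sway time series;
    a simple proxy for coordination at longer timescales."""
    return float(np.corrcoef(sway_a, sway_b)[0, 1])
```

A usage example: perfectly mirrored sway signals yield a correlation of 1.0, while tighter keystroke alignment drives the mean asynchrony toward zero.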
Auditory processing in general, and music perception in particular, is hampered in adult cochlear implant (CI) users. To examine the residual music perception skills and their underlying neural correlates in CI users implanted in adolescence or adulthood, we conducted an electrophysiological and behavioral study comparing adult CI users with normal-hearing age-matched controls (NH controls). We used a newly developed musical multi-feature paradigm, which makes it possible to test automatic auditory discrimination of six different types of sound feature changes inserted within a musically enriched setting lasting only 20 min. The presentation of stimuli did not require the participants’ attention, allowing the study of the early automatic stage of feature processing in the auditory cortex. For the CI users, we obtained mismatch negativity (MMN) brain responses to five feature changes but not to changes of rhythm, whereas we obtained MMNs for all the feature changes in the NH controls. Furthermore, the MMNs of CI users were reduced in amplitude and delayed relative to those of NH controls for changes of pitch and guitar timbre. No group differences in MMN parameters were found for changes in intensity and saxophone timbre. Moreover, the MMNs in CI users reflected the behavioral scores from a corresponding discrimination task and were correlated with patients’ age and speech intelligibility. Our results suggest that even though CI users do not perform at the same level as NH controls in the neural discrimination of pitch-based features, they do possess potential neural abilities for music processing. However, CI users showed a disrupted ability to automatically discriminate rhythmic changes compared with controls. The current behavioral and MMN findings highlight the residual neural skills for music processing even in CI users who have been implanted in adolescence or adulthood. Highlights:
- Automatic brain responses to musical feature changes reflect the limitations of central auditory processing in adult cochlear implant (CI) users.
- The brains of adult CI users automatically process sound feature changes even when these are inserted in a musical context.
- CI users show disrupted automatic discriminatory abilities for rhythm in the brain.
- Our fast paradigm demonstrates residual musical abilities in the brains of adult CI users, giving hope for their future rehabilitation.
cochlear implant; auditory evoked potentials; mismatch negativity; music multi-feature paradigm; music perception
This paper proposes a method for translating human EEG into music, so as to represent mental states through music. The arousal levels of the brain’s mental state and of the music’s emotion are implicitly used as the bridge between the mind and the music. The arousal level of the brain is estimated from EEG features extracted mainly by wavelet analysis, and the arousal level of the music is related to musical parameters such as pitch, tempo, rhythm, and tonality. During composition, basic music principles (harmony and structure) were taken into consideration. Using EEGs recorded during various sleep stages as an example, the music generated from them showed different patterns of pitch, rhythm, and tonality. Thirty-five volunteers listened to the music pieces, and a significant difference in music arousal levels was found. This implies that different mental states may be identified by the corresponding music, so that music generated from EEG may be a potential tool for EEG monitoring, biofeedback therapy, and so forth.
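As a rough illustration of the kind of arousal-to-music mapping described above, the sketch below turns a normalized arousal level into a tempo, a pitch register, and a mode. The parameter ranges, the linear mapping, and the major/minor threshold are our own assumptions, not the rules published in the paper.

```python
def arousal_to_music_params(arousal):
    """Map a normalized arousal level (0-1), e.g. derived from
    wavelet-based EEG features, to illustrative musical parameters.
    All ranges here are hypothetical choices for demonstration."""
    if not 0.0 <= arousal <= 1.0:
        raise ValueError("arousal must lie in [0, 1]")
    tempo = 60 + 80 * arousal           # BPM: slower when drowsy, faster when alert
    pitch = 48 + round(24 * arousal)    # MIDI register: roughly C3 up to C5
    mode = "major" if arousal >= 0.5 else "minor"  # crude tonality choice
    return {"tempo_bpm": tempo, "pitch_midi": pitch, "mode": mode}
```

For example, a low-arousal (deep-sleep) EEG segment would yield slow, low, minor-mode material, while a high-arousal segment would yield fast, high, major-mode material.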