1.  Doing Duo – a case study of entrainment in William Forsythe’s choreography “Duo” 
Entrainment theory focuses on processes in which interacting (i.e., coupled) rhythmic systems stabilize, producing synchronization in the ideal sense, and forms of phase-related rhythmic coordination in complex cases. In human action, entrainment involves spatiotemporal and social aspects, characterizing the meaningful activities of music, dance, and communication. How can the phenomenon of human entrainment be meaningfully studied in complex situations such as dance? We present an in-progress case study of entrainment in William Forsythe’s choreography Duo, a duet in which coordinated rhythmic activity is achieved without an external musical beat and without touch-based interaction. Using concepts of entrainment from different disciplines as well as insight from Duo performer Riley Watts, we question definitions of entrainment in the context of dance. The functions of chorusing, turn-taking, complementary action, cues, and alignments are discussed and linked to supporting annotated video material. While Duo challenges the definition of entrainment in dance as coordinated response to an external musical or rhythmic signal, it supports the definition of entrainment as coordinated interplay of motion and sound production by active agents (i.e., dancers) in the field. Agreeing that human entrainment should be studied on multiple levels, we suggest that entrainment between the dancers in Duo is elastic in time and propose how to test this hypothesis empirically. We do not claim that our proposed model of elasticity is applicable to all forms of human entrainment nor to all examples of entrainment in dance. Rather, we suggest studying higher-order phase correction (the stabilizing tendency of entrainment) as a potential aspect to be incorporated into other models.
PMCID: PMC4204438  PMID: 25374522
entrainment; contemporary dance; choreography; multimodal communication; coordination; synchronization; joint action
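The stabilizing tendency of phase correction that this abstract highlights can be illustrated with a minimal linear sketch (a toy first-order model in the spirit of sensorimotor-synchronization research, not the authors' proposed elasticity model; the correction gains, period, and noise level are hypothetical):

```python
import random

def simulate_duet(alpha_a=0.4, alpha_b=0.4, period=0.5, steps=200, noise=0.01):
    """Two agents produce periodic onsets; each shifts its next onset by a
    fraction (alpha) of the last observed asynchrony -- the stabilizing
    tendency that phase-correction models capture."""
    t_a, t_b = 0.0, 0.02  # initial onsets: agent B starts 20 ms late
    asynchronies = []
    for _ in range(steps):
        asyn = t_a - t_b
        asynchronies.append(asyn)
        t_a += period - alpha_a * asyn + random.gauss(0, noise)
        t_b += period + alpha_b * asyn + random.gauss(0, noise)
    return asynchronies

random.seed(1)
asyn = simulate_duet()
# With 0 < alpha_a + alpha_b < 2, the asynchrony contracts toward a noise floor.
```

Entrainment that is "elastic in time" could then be probed by letting the gains vary over the course of a phrase rather than holding them fixed.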
2.  Chorusing, synchrony, and the evolutionary functions of rhythm 
Frontiers in Psychology  2014;5:1118.
A central goal of biomusicology is to understand the biological basis of human musicality. One approach to this problem has been to compare core components of human musicality (relative pitch perception, entrainment, etc.) with similar capacities in other animal species. Here we extend and clarify this comparative approach with respect to rhythm. First, whereas most comparisons between human music and animal acoustic behavior have focused on spectral properties (melody and harmony), we argue for the central importance of temporal properties, and propose that this domain is ripe for further comparative research. Second, whereas most rhythm research in non-human animals has examined animal timing in isolation, we consider how chorusing dynamics can shape individual timing, as in human music and dance, arguing that group behavior is key to understanding the adaptive functions of rhythm. To illustrate the interdependence between individual and chorusing dynamics, we present a computational model of chorusing agents relating individual call timing with synchronous group behavior. Third, we distinguish and clarify mechanistic and functional explanations of rhythmic phenomena, often conflated in the literature, arguing that this distinction is key for understanding the evolution of musicality. Fourth, we expand biomusicological discussions beyond the species typically considered, providing an overview of chorusing and rhythmic behavior across a broad range of taxa (orthopterans, fireflies, frogs, birds, and primates). Finally, we propose an “Evolving Signal Timing” hypothesis, suggesting that similarities between timing abilities in biological species will be based on comparable chorusing behaviors. 
We conclude that the comparative study of chorusing species can provide important insights into the adaptive function(s) of rhythmic behavior in our “proto-musical” primate ancestors, and thus inform our understanding of the biology and evolution of rhythm in human music and language.
PMCID: PMC4193405  PMID: 25346705
rhythm; synchronization; isochrony; chorusing; evolution of communication; music perception; coupled oscillators; timing
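The interdependence between individual call timing and synchronous group behavior that such models capture can be sketched with a generic Kuramoto-style system of coupled oscillators (a standard textbook model, not the authors' specific agent model; the frequency spread and coupling strength below are illustrative):

```python
import math
import random

def chorus_order(n=10, coupling=1.5, dt=0.01, steps=3000):
    """n callers with slightly different natural call rates each pull their
    phase toward the group; returns the Kuramoto order parameter r
    (0 = incoherent chorus, 1 = perfect synchrony)."""
    # Natural frequencies: about one call per second, spread by +/- 2.5%.
    omegas = [2 * math.pi * (1 + 0.05 * (i / (n - 1) - 0.5)) for i in range(n)]
    random.seed(3)
    phases = [random.uniform(0, 2 * math.pi) for _ in range(n)]
    for _ in range(steps):
        phases = [
            th + (om + coupling * sum(math.sin(p - th) for p in phases) / n) * dt
            for th, om in zip(phases, omegas)
        ]
    # Order parameter: length of the mean phase vector across the chorus.
    re = sum(math.cos(th) for th in phases) / n
    im = sum(math.sin(th) for th in phases) / n
    return math.hypot(re, im)

r = chorus_order()  # coupling well above the frequency spread -> r near 1
```

Lowering `coupling` below the spread of natural frequencies leaves the chorus incoherent, which is one way such models separate mechanistic capacity from group-level outcome.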
3.  The performative pleasure of imprecision: a diachronic study of entrainment in music performance 
This study focuses on a moment of live performance in which the entrainment amongst a musical quartet is threatened. Entrainment is asymmetric insofar as there is an ensemble leader who improvises and expands the structure of a last chorus of a piece of music beyond the limits tacitly negotiated during prior rehearsals and performances. Despite the risk of entrainment being disturbed and performance interrupted, the other three musicians in the quartet follow the leading performer and smoothly transition into unprecedented performance territory. We use this moment of live performance to work back through the fieldwork data, building a diachronic study of the development and bases of entrainment in live music performance. We introduce the concept of entrainment and profile previous theory and research relevant to entrainment in music performance. After outlining our methodology, we trace the evolution of the structure of the piece of music from first rehearsal to final performance. Using video clip analysis, interviews, and field notes, we consider how entrainment shaped and was shaped by the moment of performance in focus. The sense of trust between quartet musicians is established through entrainment processes and is consolidated via smooth adaptation to the threats of disruption. Non-verbal communicative exchanges, via eye contact, gesture, and spatial proximity, sustain entrainment through phase shifts occurring swiftly and on the fly in performance contexts. These exchanges permit smooth adaptation, promoting trust. This frees the quartet members to play with the potential disturbance of equilibrium inherent in entrained relationships and to play with this tension in an improvisatory way that enhances audience engagement and the live quality of performance.
PMCID: PMC4212675  PMID: 25400567
music performance; entrainment; improvisation; qualitative research; diachronic fieldwork
4.  Rhythmic complexity and predictive coding: a novel approach to modeling rhythm and meter perception in music 
Frontiers in Psychology  2014;5:1111.
Musical rhythm, consisting of apparently abstract intervals of accented temporal events, has a remarkable capacity to move our minds and bodies. How does the cognitive system enable our experiences of rhythmically complex music? In this paper, we describe some common forms of rhythmic complexity in music and propose the theory of predictive coding (PC) as a framework for understanding how rhythm and rhythmic complexity are processed in the brain. We also consider why we feel so compelled by rhythmic tension in music. First, we consider theories of rhythm and meter perception, which provide hierarchical and computational approaches to modeling. Second, we present the theory of PC, which posits a hierarchical organization of brain responses reflecting fundamental, survival-related mechanisms associated with predicting future events. According to this theory, perception and learning are manifested through the brain’s Bayesian minimization of the error between the input to the brain and the brain’s prior expectations. Third, we develop a PC model of musical rhythm, in which rhythm perception is conceptualized as an interaction between what is heard (“rhythm”) and the brain’s anticipatory structuring of music (“meter”). Finally, we review empirical studies of the neural and behavioral effects of syncopation, polyrhythm and groove, and propose how these studies can be seen as special cases of the PC theory. We argue that musical rhythm exploits the brain’s general principles of prediction and propose that pleasure and desire for sensorimotor synchronization from musical rhythm may be a result of such mechanisms.
PMCID: PMC4181238  PMID: 25324813
rhythm; meter; rhythmic complexity; predictive coding; pleasure
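The Bayesian error minimization at the heart of predictive coding reduces, in the simplest Gaussian case, to a precision-weighted prediction-error update (a didactic sketch, not the authors' model; the beat interval, onsets, and precision values are hypothetical):

```python
def update_expected_interval(prior, observed, prior_precision, obs_precision):
    """Posterior mean after one precision-weighted prediction-error update:
    the expectation moves toward the observation in proportion to how
    reliable (precise) the sensory input is relative to the prior."""
    prediction_error = observed - prior
    gain = obs_precision / (prior_precision + obs_precision)
    return prior + gain * prediction_error

# A listener expecting 500 ms between beats hears repeated early (syncopated)
# onsets at 430 ms; the metrical expectation is gradually revised.
expected = 500.0
history = []
for onset in (430.0, 430.0, 430.0):
    expected = update_expected_interval(
        expected, onset, prior_precision=4.0, obs_precision=1.0
    )
    history.append(expected)
```

Rhythmic tension would then correspond to persistent, precision-weighted prediction error that the listener's metrical model cannot fully explain away.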
5.  How musical training affects cognitive development: rhythm, reward and other modulating variables 
Musical training has recently gained additional interest in education as increasing neuroscientific research demonstrates its positive effects on brain development. Neuroimaging has revealed plastic changes in the brains of adult musicians, but it is still unclear to what extent these are the product of intensive music training rather than of other factors, such as preexisting biological markers of musicality. In this review, we synthesize a large body of studies demonstrating that benefits of musical training extend beyond the skills it directly aims to train and last well into adulthood. For example, children who undergo musical training have better verbal memory, second language pronunciation accuracy, reading ability and executive functions. Learning to play an instrument as a child may even predict academic performance and IQ in young adulthood. The degree of observed structural and functional adaptation in the brain correlates with intensity and duration of practice. Importantly, the effects on cognitive development depend on the timing of musical initiation due to sensitive periods during development, as well as on several other modulating variables. Notably, we point to motivation, reward and social context of musical education, which are important yet neglected factors affecting the long-term benefits of musical training. Further, we introduce the notion of rhythmic entrainment and suggest that it may represent a mechanism supporting learning and development of executive functions. It also hones temporal processing and orienting of attention in time that may underlie enhancements observed in reading and verbal memory. We conclude that musical training uniquely engenders near and far transfer effects, preparing a foundation for a range of skills, and thus fostering cognitive development.
PMCID: PMC3957486  PMID: 24672420
musical training; brain plasticity; developmental neuroscience; music education; rhythmic entrainment
6.  The Influence of Body Movements on Children’s Perception of Music with an Ambiguous Expressive Character 
PLoS ONE  2013;8(1):e54682.
The theory of embodied music cognition states that the perception and cognition of music is firmly, although not exclusively, linked to action patterns associated with that music. In this regard, the focus lies mostly on how music promotes certain action tendencies (i.e., dance, entrainment, etc.). Only recently, studies have started to devote attention to the reciprocal effects that people’s body movements may exert on how people perceive certain aspects of music and sound (e.g., pitch, meter, musical preference, etc.). The present study positions itself in this line of research. The central research question is whether expressive body movements, which are systematically paired with music, can modulate children’s perception of musical expressiveness. We present a behavioral experiment in which different groups of children (7–8 years, N = 46) either repetitively performed a happy or a sad choreography in response to expressively ambiguous music or merely listened to that music. The results of our study show indeed that children’s perception of musical expressiveness is modulated in accordance with the expressive character of the dance choreography performed to the music. This finding supports theories that claim a strong connection between action and perception, although further research is needed to uncover the details of this connection.
PMCID: PMC3554646  PMID: 23358805
7.  Dynamic musical communication of core affect 
Is there something special about the way music communicates feelings? Theorists since Meyer (1956) have attempted to explain how music could stimulate varied and subtle affective experiences by violating learned expectancies, or by mimicking other forms of social interaction. Our proposal is that music speaks to the brain in its own language; it need not imitate any other form of communication. We review recent theoretical and empirical literature, which suggests that all conscious processes consist of dynamic neural events, produced by spatially dispersed processes in the physical brain. Intentional thought and affective experience arise as dynamical aspects of neural events taking place in multiple brain areas simultaneously. At any given moment, this content comprises a unified “scene” that is integrated into a dynamic core through synchrony of neuronal oscillations. We propose that (1) neurodynamic synchrony with musical stimuli gives rise to musical qualia including tonal and temporal expectancies, and that (2) music-synchronous responses couple into core neurodynamics, enabling music to directly modulate core affect. Expressive music performance, for example, may recruit rhythm-synchronous neural responses to support affective communication. We suggest that the dynamic relationship between musical expression and the experience of affect presents a unique opportunity for the study of emotional experience. This may help elucidate the neural mechanisms underlying arousal and valence, and offer a new approach to exploring the complex dynamics of the how and why of emotional experience.
PMCID: PMC3956121  PMID: 24672492
neurodynamics; consciousness; affect; emotion; musical expectancy; oscillation; synchrony
8.  Global music approach to persons with dementia: evidence and practice 
Music is an important resource for achieving psychological, cognitive, and social goals in the field of dementia. This paper describes the different types of evidence-based music interventions found in the literature and proposes a structured intervention model (global music approach to persons with dementia, GMA-D). The literature concerning music and dementia was analyzed, with emphasis on recent studies and studies with robust scientific characteristics. From this background, a global music approach was proposed using music and sound–music elements according to the needs, clinical characteristics, and therapeutic–rehabilitation goals that emerge in the care of persons with dementia. From the literature analysis the following evidence-based interventions emerged: active music therapy (psychological and rehabilitative approaches), active music therapy with family caregivers and persons with dementia, music-based interventions, caregiver singing, individualized music listening, and background music. The characteristics of each type of intervention are described and discussed. Standardizing the operational methods and the evaluation of the individual activities, together with joint practice, can contribute to validating the proposed model. The model can be considered a low-cost nonpharmacological intervention and a therapeutic–rehabilitation method for reducing behavioral disturbances, stimulating cognitive functions, and increasing the overall quality of life of persons with dementia.
PMCID: PMC4199985  PMID: 25336931
music; music therapy; dementia; global music approach in dementia; evidence-based practice
9.  “Some like it hot”: spectators who score high on the personality trait openness enjoy the excitement of hearing dancers breathing without music 
Music is an integral part of dance. Over the last 10 years, however, dance stimuli (without music) have been repeatedly used to study action observation processes, increasing our understanding of the influence of observers’ physical abilities on action perception. Moreover, beyond trained skills and empathy traits, very little is known about how other spectator characteristics modulate action observation and action preference. Since strong correlations have been shown between music and personality traits, here we aim to investigate how personality traits shape the appreciation of dance when it is presented with three different sound scores. We therefore investigated the relationship between personality traits and the subjective esthetic experience of 52 spectators watching a 24-min contemporary dance performance, projected on a big screen, containing three movement phrases performed to three different sound scores: classical music (i.e., Bach), an electronic sound score, and a section without music in which the breathing of the performers was audible. We found, first, that spectators rated the experience of watching dance without music significantly differently from dance with music. Second, the higher spectators scored on the Big Five personality factor openness, the more they liked the no-music section. Third, spectators’ physical experience with dance was not linked to their appreciation but was significantly related to high average extraversion scores. For the first time, we show that spectators’ reported entrainment to watching dance movements without music is strongly related to their personality and thus may need to be considered when using dance as a means to investigate action observation processes and esthetic preferences.
PMCID: PMC4161163  PMID: 25309393
action observation; personality; music; esthetic appreciation; entrainment
10.  The ecology of entrainment: Foundations of coordinated rhythmic movement 
Music perception  2010;28(1):3-14.
Entrainment has been studied in a variety of contexts including music perception, dance, verbal communication and motor coordination more generally. Here we seek to provide a unifying framework that incorporates the key aspects of entrainment as it has been studied in these varying domains. We propose that there are a number of types of entrainment that build upon pre-existing adaptations that allow organisms to perceive stimuli as rhythmic, to produce periodic stimuli, and to integrate the two using sensory feedback. We suggest that social entrainment is a special case of spatiotemporal coordination where the rhythmic signal originates from another individual. We use this framework to understand the function and evolutionary basis for coordinated rhythmic movement and to explore questions about the nature of entrainment in music and dance. The framework of entrainment presented here has a number of implications for the vocal learning hypothesis and other proposals for the evolution of coordinated rhythmic behavior across an array of species.
PMCID: PMC3137907  PMID: 21776183
11.  Tempo and walking speed with music in the urban context 
Frontiers in Psychology  2014;5:1361.
The study explored the effect of music on the temporal aspects of walking behavior in a real outdoor urban setting. First, spontaneous synchronization between the beat of the music and step tempo was explored. The effect of motivational and non-motivational music (Karageorghis et al., 1999) on walking speed was also studied. Finally, we investigated whether music can mask the effects of visual aspects of the walking route environment, which involve fluctuation of walking speed as a response to particular environmental settings. In two experiments, we asked participants to walk around an urban route 1.8 km in length through various environments in the downtown area of Hradec Králové. In Experiment 1, the participants listened to a musical track consisting of world pop music with a clear beat. In Experiment 2, participants walked either with motivational music, which had a fast tempo and a strong rhythm, or with non-motivational music, which was slower and pleasant but did not strongly invite movement. The musical beat, as well as the sonic character of the music listened to while walking, influenced walking speed but did not lead to precise synchronization. Many subjects did not spontaneously synchronize with the beat of the music at all, and some synchronized only part of the time. The fast, energetic music increased walking tempo, while the slower, relaxing music decreased it. Further, listening to music with headphones while walking can mask the influence of the surrounding environment to some extent. Both motivational and non-motivational music had a larger effect than the world pop music from Experiment 1. Individual differences in responses to the music listened to while walking, linked to extraversion and neuroticism, were also observed.
The findings described here could be useful in rhythmic stimulation for enhancing or recovering the features of movement performance.
PMCID: PMC4251309  PMID: 25520682
walking speed; walking with music; spontaneous synchronization; motivational music; urban walk; auditory bubble; personality
12.  Animal signals and emotion in music: coordinating affect across groups 
Researchers studying the emotional impact of music have not traditionally been concerned with the principled relationship between form and function in evolved animal signals. The acoustic structure of musical forms is related in important ways to emotion perception, and thus research on non-human animal vocalizations is relevant for understanding emotion in music. Musical behavior occurs in cultural contexts that include many other coordinated activities which mark group identity, and can allow people to communicate within and between social alliances. The emotional impact of music might be best understood as a proximate mechanism serving an ultimately social function. Recent work reveals intimate connections between properties of certain animal signals and evocative aspects of human music, including (1) examinations of the role of nonlinearities (e.g., broadband noise) in non-human animal vocalizations, and the analogous production and perception of these features in human music, and (2) an analysis of group musical performances and possible relationships to non-human animal chorusing and emotional contagion effects. Communicative features in music are likely due primarily to evolutionary by-products of phylogenetically older, but still intact communication systems. But in some cases, such as the coordinated rhythmic sounds produced by groups of musicians, our appreciation and emotional engagement might be driven by an adaptive social signaling system. Future empirical work should examine human musical behavior through the comparative lens of behavioral ecology and an adaptationist cognitive science. By this view, particular coordinated sound combinations generated by musicians exploit evolved perceptual response biases – many shared across species – and proliferate through cultural evolutionary processes.
PMCID: PMC3872313  PMID: 24427146
emotion in music; arousal; nonlinearities; music distortion; coalition signaling
13.  Entrainment and motor emulation approaches to joint action: Alternatives or complementary approaches? 
Joint actions, such as music and dance, rely crucially on the ability of two, or more, agents to align their actions with great temporal precision. Within the literature that seeks to explain how this action alignment is possible, two broad approaches have appeared. The first, what we term the entrainment approach, has sought to explain these alignment phenomena in terms of the behavioral dynamics of the system of two agents. The second, what we term the emulator approach, has sought to explain these alignment phenomena in terms of mechanisms, such as forward and inverse models, that are implemented in the brain. They have often been pitched as alternative explanations of the same phenomena; however, we argue that this view is mistaken, because, as we show, these two approaches are engaged in distinct, and not mutually exclusive, explanatory tasks. While the entrainment approach seeks to uncover the general laws that govern behavior, the emulator approach seeks to uncover mechanisms. We argue that it is possible to do both and that the entrainment approach must pay greater attention to the mechanisms that support the behavioral dynamics of interest. In short, the entrainment approach must be transformed into a neuroentrainment approach by adopting a mechanistic view of explanation and by seeking mechanisms that are implemented in the brain.
PMCID: PMC4174887  PMID: 25309403
motor emulation; perception–action; entrainment; mechanistic explanation; joint action
14.  Human Neuromagnetic Steady-State Responses to Amplitude-Modulated Tones, Speech, and Music 
Ear and Hearing  2014;35(4):461-467.
Auditory steady-state responses that can be elicited by various periodic sounds inform about subcortical and early cortical auditory processing. Steady-state responses to amplitude-modulated pure tones have been used to scrutinize binaural interaction by frequency-tagging the two ears’ inputs at different frequencies. Unlike pure tones, speech and music are physically very complex, as they include many frequency components, pauses, and large temporal variations. To examine the utility of magnetoencephalographic (MEG) steady-state fields (SSFs) in the study of early cortical processing of complex natural sounds, the authors tested the extent to which amplitude-modulated speech and music can elicit reliable SSFs.
MEG responses were recorded to 90-s-long binaural tones, speech, and music, amplitude-modulated at 41.1 Hz at four different depths (25, 50, 75, and 100%). The subjects were 11 healthy, normal-hearing adults. MEG signals were averaged in phase with the modulation frequency, and the sources of the resulting SSFs were modeled by current dipoles. After the MEG recording, intelligibility of the speech, musical quality of the music stimuli, naturalness of music and speech stimuli, and the perceived deterioration caused by the modulation were evaluated on visual analog scales.
The perceived quality of the stimuli decreased as a function of increasing modulation depth, more strongly for music than speech; yet, all subjects considered the speech intelligible even at the 100% modulation. SSFs were the strongest to tones and the weakest to speech stimuli; the amplitudes increased with increasing modulation depth for all stimuli. SSFs to tones were reliably detectable at all modulation depths (in all subjects in the right hemisphere, in 9 subjects in the left hemisphere) and to music stimuli at 50 to 100% depths, whereas speech usually elicited clear SSFs only at 100% depth.
The hemispheric balance of SSFs was toward the right hemisphere for tones and speech, whereas SSFs to music showed no lateralization. In addition, the right lateralization of SSFs to the speech stimuli decreased with decreasing modulation depth.
The results showed that SSFs can be reliably measured to amplitude-modulated natural sounds, with slightly different hemispheric lateralization for different carrier sounds. With speech stimuli, modulation at 100% depth is required, whereas for music the 75% or even 50% modulation depths provide a reasonable compromise between the signal-to-noise ratio of SSFs and sound quality or perceptual requirements. SSF recordings thus seem feasible for assessing the early cortical processing of natural sounds.
Auditory steady state responses to pure tones have been used to study subcortical and cortical processing, to scrutinize binaural interaction, and to evaluate hearing in an objective way. In daily life, however, we encounter physically much more complex sounds, such as music and speech. This study demonstrates that not only pure tones but also amplitude-modulated speech and music, both perceived to have tolerable sound quality, can elicit reliable magnetoencephalographic steady state fields. The strengths and hemispheric lateralization of the responses differed between the carrier sounds. The results indicate that steady state responses could be used to study the early cortical processing of natural sounds.
PMCID: PMC4072443  PMID: 24603544
Amplitude modulation; Auditory; Frequency tagging; Magnetoencephalography; Natural stimuli
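Averaging the MEG signal in phase with the modulation frequency is equivalent to projecting it onto a sine/cosine pair at the tagging frequency. A minimal sketch on synthetic data (illustrative only, not the study's analysis pipeline; for simplicity the tag is placed at 41 Hz rather than 41.1 Hz so that an integer number of cycles fits the one-second window):

```python
import math

def tagged_amplitude(signal, fs, f_tag):
    """Amplitude of the steady-state component at f_tag, recovered by
    projecting the signal onto sine and cosine at the tagging frequency."""
    n = len(signal)
    c = sum(x * math.cos(2 * math.pi * f_tag * i / fs) for i, x in enumerate(signal))
    s = sum(x * math.sin(2 * math.pi * f_tag * i / fs) for i, x in enumerate(signal))
    return 2.0 * math.hypot(c, s) / n

fs = 1000.0  # sampling rate in Hz
# One second of data: a unit-amplitude 41 Hz "response" plus a 10 Hz background.
sig = [math.sin(2 * math.pi * 41.0 * i / fs) + 0.5 * math.sin(2 * math.pi * 10.0 * i / fs)
       for i in range(int(fs))]
amp = tagged_amplitude(sig, fs, 41.0)  # recovers the tagged component only
```

Because the projection is phase-locked to the tag, background activity at other frequencies averages out, which is what makes frequency tagging usable even for spectrally rich carriers such as speech and music.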
15.  Auditory-motor entrainment and phonological skills: precise auditory timing hypothesis (PATH) 
Phonological skills are enhanced by music training, but the mechanisms enabling this cross-domain enhancement remain unknown. To explain this cross-domain transfer, we propose a precise auditory timing hypothesis (PATH) whereby entrainment practice is the core mechanism underlying enhanced phonological abilities in musicians. Both rhythmic synchronization and language skills such as consonant discrimination, detection of word and phrase boundaries, and conversational turn-taking rely on the perception of extremely fine-grained timing details in sound. Auditory-motor timing is an acoustic feature which meets all five of the pre-conditions necessary for cross-domain enhancement to occur (Patel, 2011, 2012, 2014). There is overlap between the neural networks that process timing in the context of both music and language. Entrainment to music demands more precise timing sensitivity than does language processing. Moreover, auditory-motor timing integration captures the emotion of the trainee, is repeatedly practiced, and demands focused attention. The PATH predicts that musical training emphasizing entrainment will be particularly effective in enhancing phonological skills.
PMCID: PMC4245894  PMID: 25505879
synchronization; auditory timing; phonological skills; musical training; reading
16.  Musical Aptitude Is Associated with AVPR1A-Haplotypes 
PLoS ONE  2009;4(5):e5534.
Artistic creativity forms the basis of music culture and music industry. Composing, improvising and arranging music are complex creative functions of the human brain, whose biological value remains unknown. We hypothesized that practicing music is social communication that needs musical aptitude and even creativity in music. In order to understand the neurobiological basis of music in human evolution and communication we analyzed polymorphisms of the arginine vasopressin receptor 1A (AVPR1A), serotonin transporter (SLC6A4), catechol-O-methyltransferase (COMT), dopamine receptor D2 (DRD2) and tryptophan hydroxylase 1 (TPH1), genes associated with social bonding and cognitive functions, in 19 Finnish families (n = 343 members) with professional musicians and/or active amateurs. All family members were tested for musical aptitude using the auditory structuring ability test (Karma Music test; KMT) and Carl Seashore’s tests for pitch (SP) and for time (ST). Data on creativity in music (composing, improvising and/or arranging music) was surveyed using a web-based questionnaire. Here we show for the first time that creative functions in music have a strong genetic component (h2 = .84; composing h2 = .40; arranging h2 = .46; improvising h2 = .62) in Finnish multigenerational families. We also show that high music test scores are significantly associated with creative functions in music (p<.0001). We discovered an overall haplotype association with AVPR1A gene (markers RS1 and RS3) and KMT (p = 0.0008; corrected p = 0.00002), SP (p = 0.0261; corrected p = 0.0072) and combined music test scores (COMB) (p = 0.0056; corrected p = 0.0006). AVPR1A haplotype AVR+RS1 further suggested a positive association with ST (p = 0.0038; corrected p = 0.00184) and COMB (p = 0.0083; corrected p = 0.0040) using the haplotype-based association test HBAT. 
The results suggest that the neurobiology of music perception and production is likely to be related to the pathways affecting intrinsic attachment behavior.
PMCID: PMC2678260  PMID: 19461995
17.  Music and social bonding: “self-other” merging and neurohormonal mechanisms 
Frontiers in Psychology  2014;5:1096.
It has been suggested that a key function of music during its development and spread amongst human populations was its capacity to create and strengthen social bonds amongst interacting group members. However, the mechanisms by which this occurs have not been fully discussed. In this paper we review evidence supporting two thus far independently investigated mechanisms for this social bonding effect: self-other merging as a consequence of inter-personal synchrony, and the release of endorphins during exertive rhythmic activities including musical interaction. In general, self-other merging has been experimentally investigated using dyads, which provide limited insight into large-scale musical activities. Given that music can provide an external rhythmic framework that facilitates synchrony, explanations of social bonding during group musical activities should include reference to endorphins, which are released during synchronized exertive movements. Endorphins (and the endogenous opioid system (EOS) in general) are involved in social bonding across primate species, and are associated with a number of human social behaviors (e.g., laughter, synchronized sports), as well as musical activities (e.g., singing and dancing). Furthermore, passively listening to music engages the EOS, so here we suggest that both self-other merging and the EOS are important in the social bonding effects of music. In order to investigate possible interactions between these two mechanisms, future experiments should recreate ecologically valid examples of musical activities.
PMCID: PMC4179700  PMID: 25324805
music; rhythm; social bonding; endorphins; self-other merging; synchrony
18.  Residual Neural Processing of Musical Sound Features in Adult Cochlear Implant Users 
Auditory processing in general and music perception in particular are hampered in adult cochlear implant (CI) users. To examine the residual music perception skills and their underlying neural correlates in CI users implanted in adolescence or adulthood, we conducted an electrophysiological and behavioral study comparing adult CI users with normal-hearing age-matched controls (NH controls). We used a newly developed musical multi-feature paradigm, which makes it possible to test automatic auditory discrimination of six different types of sound feature changes inserted within a musically enriched setting lasting only 20 min. The presentation of stimuli did not require the participants’ attention, allowing the study of the early automatic stage of feature processing in the auditory cortex. For the CI users, we obtained mismatch negativity (MMN) brain responses to five feature changes but not to changes of rhythm, whereas we obtained MMNs for all the feature changes in the NH controls. Furthermore, the MMNs of CI users were reduced in amplitude and delayed relative to those of NH controls for changes of pitch and guitar timbre. No other group differences in MMN parameters were found for changes in intensity and saxophone timbre. Furthermore, the MMNs in CI users reflected the behavioral scores from a respective discrimination task and were correlated with patients’ age and speech intelligibility. Our results suggest that even though CI users are not performing at the same level as NH controls in neural discrimination of pitch-based features, they do possess potential neural abilities for music processing. However, CI users showed a disrupted ability to automatically discriminate rhythmic changes compared with controls. The current behavioral and MMN findings highlight the residual neural skills for music processing even in CI users who have been implanted in adolescence or adulthood.
Highlights:
•Automatic brain responses to musical feature changes reflect the limitations of central auditory processing in adult cochlear implant users.
•The brains of adult CI users automatically process sound feature changes even when inserted in a musical context.
•CI users show disrupted automatic discriminatory abilities for rhythm in the brain.
•Our fast paradigm demonstrates residual musical abilities in the brains of adult CI users, giving hope for their future rehabilitation.
PMCID: PMC3982066  PMID: 24772074
cochlear implant; auditory evoked potentials; mismatch negativity; music multi-feature paradigm; music perception
19.  Effects of music therapy in the treatment of children with delayed speech development - results of a pilot study 
Language development is one of the most significant processes of early childhood development. Children with delayed speech development are more at risk of acquiring other cognitive, social-emotional, and school-related problems. Music therapy appears to facilitate speech development in children, even within a short period of time. The aim of this pilot study is to explore the effects of music therapy in children with delayed speech development.
A total of 18 children aged 3.5 to 6 years with delayed speech development took part in this observational study in which music therapy and no treatment were compared to demonstrate effectiveness. Individual music therapy was provided on an outpatient basis. An ABAB reversal design with alternations between music therapy and no treatment with an interval of approximately eight weeks between the blocks was chosen. Before and after each study period, a speech development test, a non-verbal intelligence test for children, and music therapy assessment scales were used to evaluate the speech development of the children.
Compared to the baseline, we found a positive development in the study group after receiving music therapy. Both phonological capacity and the children's understanding of speech increased under treatment, as well as their cognitive structures, action patterns, and level of intelligence. Throughout the study period, developmental age converged with their biological age. Ratings according to the Nordoff-Robbins scales showed clinically significant changes in the children, namely in the areas of client-therapist relationship and communication.
This study suggests that music therapy may have a measurable effect on the speech development of children through the treatment's interactions with fundamental aspects of speech development, including the ability to form and maintain relationships and prosodic abilities. Thus, music therapy may provide a basic and supportive therapy for children with delayed speech development. Further studies should be conducted to investigate the mechanisms of these interactions in greater depth.
Trial registration
The trial is registered in the German clinical trials register; Trial-No.: DRKS00000343
PMCID: PMC2921108  PMID: 20663139
20.  Coupling governs entrainment range of circadian clocks 
Circadian clock oscillator properties that are crucial for synchronization with the environment (entrainment) are studied in experiment and theory. The ratio between stimulus (zeitgeber) strength and oscillator amplitude, and the rigidity of the oscillatory system (relaxation rate upon perturbation), determine entrainment properties. Coupling among oscillators affects both qualities, resulting in increased amplitude and rigidity. Uncoupled lung clocks entrain to extreme zeitgeber cycles, whereas the coupled oscillator system in the suprachiasmatic nucleus (SCN) does not; however, when coupling in the SCN is inhibited, larger ranges of entrainment can be achieved.
Daily rhythms in physiology, metabolism and behavior are controlled by an endogenous circadian timing system, which has evolved to synchronize an organism to periodically recurring environmental conditions, such as light–dark or temperature cycles. In mammals, the circadian system relies on cell-autonomous oscillators residing in almost every cell of the body. Cells of the SCN in the anterior hypothalamus are able to generate precise, long-lasting self-sustained circadian oscillations, which drive most rhythmic behavioral and physiological outputs, and which are believed to originate from the fact that the SCN tissue consists of tightly coupled cells (Aton and Herzog, 2005). In contrast, peripheral oscillators, such as lung tissue, exhibit seemingly damped and usually less precise oscillations, which are thought to be brought about by the lack of intercellular coupling.
Precise synchronization of these rhythms within the organism, but also with the environment (so-called entrainment), is an essential part of circadian organization. Entrainment is one of the cornerstones of circadian biology (Roenneberg et al, 2003). In evolution, it is the phase of a rhythmic variable, rather than its endogenous period, that is under selection. Thus, the synchronization of endogenous rhythms to zeitgeber cycles of the environment (resulting in a specific phase of entrainment) is fundamental for the adaptive value of circadian clocks. In this study, we systematically investigated the properties of circadian oscillators that are essential for entrainment behavior and describe coupling as a primary determinant.
As an experimental starting point of this study, we found that the circadian oscillators of lung tissue have a larger range of entrainment than SCN tissue—they readily entrained to extreme experimental temperature cycles of 20 or 28 h, whereas SCN tissue did not (Figure 4). For this purpose, we cultured SCN and lung slices derived from mice that express luciferase as a fusion protein with the clock protein PERIOD2 (Yoo et al, 2004). The detection of luciferase-driven bioluminescence allowed us to follow molecular clock gene activity in real-time over the course of several days.
In theoretical analyses, we show that both the ratio of amplitude and zeitgeber strength and, importantly, inter-oscillator coupling are major determinants for entrainment. The reason for coupling being critical is twofold: (i) Coupling makes an oscillatory system more rigid, i.e., it relaxes faster in response to a perturbation, and (ii) coupling increases the amplitude of the oscillatory system. Both of these consequences of coupling lead to a smaller entrainment range, because zeitgeber stimuli affect the oscillatory system less if the relaxation is fast and the amplitude is high (Figure 1). From these theoretical considerations, we conclude that the lung clock probably constitutes a weak oscillatory system, likely because a lack of coupling leads to a slow amplitude relaxation. (Circadian amplitude is not particularly low in lung (Figure 4).) In contrast, the SCN constitutes a rigid oscillator, whereby coupling and its described consequences probably are the primary causes for this rigidity. We then tested these theoretical predictions by experimentally perturbing coupling in the SCN (with MDL and TTX; O'Neill et al, 2008; Yamaguchi et al, 2003) and find that, indeed, reducing the coupling weakens the circadian oscillatory system in the SCN, which results in an enlargement of the entrainment range (Figure 6).
Why is the SCN designed to be a stronger circadian oscillator than peripheral organs? We speculate that the position of the SCN—as the tissue that conveys environmental timing information (i.e., light) to the rest of the body—makes it necessary to create a circadian clock that is robust against noisy environmental stimuli. The SCN oscillator needs to be robust enough to be protected from environmental noise, but flexible enough to fulfill its function as an entrainable clock even in extreme photoperiods (i.e., seasons). By the same token, peripheral clocks are more protected from the environmental zeitgebers due to intrinsic homeostatic mechanisms. Thus, they do not necessarily need to develop a strong oscillatory system (e.g., by strengthening the coupling), rather they need to stay flexible enough to respond to direct or indirect signals from the SCN, such as hormonal, neural, temperature or metabolic signals. Such a design ensures that only robust and persistent environmental signals trigger an SCN resetting response, while SCN signals can relatively easily be conveyed to the rest of the body. Thus, the robustness in the SCN clock likely serves as a filter for environmental noise.
In summary, using a combination of simulation studies, analytical calculations and experiments, we uncovered critical features for entrainment, such as zeitgeber-to-amplitude ratio and amplitude relaxation rate. Coupling is a primary factor that governs these features explaining important differences in the design of SCN and peripheral oscillators that ensure a robust, but also flexible circadian system.
Circadian clocks are endogenous oscillators driving daily rhythms in physiology and behavior. Synchronization of these timers to environmental light–dark cycles ('entrainment') is crucial for an organism's fitness. Little is known about which oscillator qualities determine entrainment, i.e., entrainment range, phase and amplitude. In a systematic theoretical and experimental study, we uncovered these qualities for circadian oscillators in the suprachiasmatic nucleus (SCN—the master clock in mammals) and the lung (a peripheral clock): (i) the ratio between stimulus (zeitgeber) strength and oscillator amplitude and (ii) the rigidity of the oscillatory system (relaxation rate upon perturbation) determine entrainment properties. Coupling among oscillators affects both qualities, resulting in increased amplitude and rigidity. These principles explain our experimental findings that lung clocks entrain to extreme zeitgeber cycles, whereas SCN clocks do not. We confirmed our theoretical predictions by showing that pharmacological inhibition of coupling in the SCN leads to larger ranges of entrainment. These differences between the master and peripheral clocks suggest that coupling-induced rigidity in the SCN filters environmental noise to create a robust circadian system.
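The amplitude-dependence described above can be illustrated with a minimal phase-only model. The sketch below is not the authors' model: the Adler phase equation and all parameter values are illustrative assumptions. It integrates dφ/dt = Δω + (Z/A)·sin φ, where a locked solution exists only while the detuning Δω stays below the zeitgeber-to-amplitude ratio Z/A, so increasing the amplitude A (as coupling does) shrinks the entrainment range:

```python
import numpy as np

def entrains(delta_omega, z_over_a, t_max=200.0, dt=0.01):
    """Integrate the Adler phase equation dphi/dt = delta_omega + (Z/A)*sin(phi).

    delta_omega: detuning between clock and zeitgeber (rad per unit time)
    z_over_a:    zeitgeber strength divided by oscillator amplitude
    Returns True if the phase difference settles at a fixed point (entrainment)
    rather than drifting indefinitely.
    """
    phi = 0.0
    for _ in range(int(t_max / dt)):
        phi += dt * (delta_omega + z_over_a * np.sin(phi))
    # at a locked fixed point the phase velocity vanishes; a drifting
    # (non-entrained) solution keeps a strictly positive velocity
    return abs(delta_omega + z_over_a * np.sin(phi)) < 1e-3

# same zeitgeber, same detuning: a low-amplitude ("lung-like") clock locks,
# while a high-amplitude ("SCN-like") clock, with a 3x smaller Z/A, does not
print(entrains(0.6, 1.0), entrains(0.6, 1.0 / 3.0))  # True False
```

Inhibiting coupling in the SCN lowers A, which in this toy picture raises Z/A and widens the band of detunings Δω that still lock, consistent with the enlarged entrainment range reported above.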
PMCID: PMC3010105  PMID: 21119632
circadian clock; coupling; entrainment; mathematical modeling; oscillator
21.  Assessment of rhythmic entrainment at multiple timescales in dyslexia: Evidence for disruption to syllable timing☆ 
Hearing Research  2014;308(100):141-161.
Developmental dyslexia is associated with rhythmic difficulties, including impaired perception of beat patterns in music and prosodic stress patterns in speech. Spoken prosodic rhythm is cued by slow (<10 Hz) fluctuations in speech signal amplitude. Impaired neural oscillatory tracking of these slow amplitude modulation (AM) patterns is one plausible source of impaired rhythm tracking in dyslexia. Here, we characterise the temporal profile of the dyslexic rhythm deficit by examining rhythmic entrainment at multiple speech timescales. Adult dyslexic participants completed two experiments aimed at testing the perception and production of speech rhythm. In the perception task, participants tapped along to the beat of 4 metrically regular nursery rhyme sentences. In the production task, participants produced the same 4 sentences in time to a metronome beat. Rhythmic entrainment was assessed using both traditional rhythmic indices and a novel AM-based measure, which utilised 3 dominant AM timescales in the speech signal, each associated with a different phonological grain size (0.9–2.5 Hz, prosodic stress; 2.5–12 Hz, syllables; 12–40 Hz, phonemes). The AM-based measure revealed atypical rhythmic entrainment by dyslexic participants to syllable patterns in speech, in perception and production. In the perception task, both groups showed equally strong phase-locking to Syllable AM patterns, but dyslexic responses were entrained to a significantly earlier oscillatory phase angle than controls. In the production task, dyslexic utterances showed shorter syllable intervals, and differences in Syllable:Phoneme AM cross-frequency synchronisation. Our data support the view that rhythmic entrainment at slow (∼5 Hz, Syllable) rates is atypical in dyslexia, suggesting that neural mechanisms for syllable perception and production may also be atypical.
These syllable timing deficits could contribute to the atypical development of phonological representations for spoken words, the central cognitive characteristic of developmental dyslexia across languages.
This article is part of a Special Issue.
•Rhythmic entrainment at the syllable timescale is disrupted in dyslexia.
•Both syllable perception and production are atypical.
•Syllable timing deficits could contribute to dyslexics' atypical phonology.
•New AM-based methodology for measuring rhythmic entrainment is introduced.
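The three AM timescales named above lend themselves to a simple band-limiting sketch. The code below is a hedged illustration, not the authors' AM-hierarchy method: the rectification-plus-FFT-masking envelope estimate and the toy stimulus are assumptions; only the band edges (0.9–2.5, 2.5–12, 12–40 Hz) come from the abstract:

```python
import numpy as np

def band_envelope(signal, fs, lo, hi):
    """Crude band-limited AM envelope: rectify, then keep only spectral
    content between lo and hi Hz by zeroing FFT bins outside the band."""
    rectified = np.abs(signal)
    spectrum = np.fft.rfft(rectified)
    freqs = np.fft.rfftfreq(rectified.size, d=1.0 / fs)
    spectrum[(freqs < lo) | (freqs > hi)] = 0.0
    return np.fft.irfft(spectrum, n=rectified.size)

fs = 1000
t = np.arange(0, 10, 1.0 / fs)
# toy "speech": a 100 Hz carrier amplitude-modulated at a 5 Hz syllable rate
signal = (1.0 + 0.8 * np.sin(2 * np.pi * 5 * t)) * np.sin(2 * np.pi * 100 * t)

stress = band_envelope(signal, fs, 0.9, 2.5)     # prosodic-stress band
syllable = band_envelope(signal, fs, 2.5, 12.0)  # syllable band
phoneme = band_envelope(signal, fs, 12.0, 40.0)  # phoneme band

# the 5 Hz modulation lives almost entirely in the syllable band
print(np.std(syllable) > 10 * np.std(stress))  # True
```

Phase-locking statistics like those reported above would then be computed between such band-limited envelopes and, e.g., tapping responses.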
PMCID: PMC3969307  PMID: 23916752
22.  Quantifying Auditory Temporal Stability in a Large Database of Recorded Music 
PLoS ONE  2014;9(12):e110452.
“Moving to the beat” is both one of the most basic and one of the most profound means by which humans (and a few other species) interact with music. Computer algorithms that detect the precise temporal location of beats (i.e., pulses of musical “energy”) in recorded music have important practical applications, such as the creation of playlists with a particular tempo for rehabilitation (e.g., rhythmic gait training), exercise (e.g., jogging), or entertainment (e.g., continuous dance mixes). Although several such algorithms return simple point estimates of an audio file’s temporal structure (e.g., “average tempo”, “time signature”), none has sought to quantify the temporal stability of a series of detected beats. Such a method, a “Balanced Evaluation of Auditory Temporal Stability” (BEATS), is proposed here, and is illustrated using the Million Song Dataset (a collection of audio features and music metadata for nearly one million audio files). A publicly accessible web interface is also presented, which combines the thresholdable statistics of BEATS with queryable metadata terms, fostering potential avenues of research and facilitating the creation of highly personalized music playlists for clinical or recreational applications.
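The abstract does not spell out the statistics inside BEATS, but the flavor of a "thresholdable" temporal-stability measure can be sketched: given a series of detected beat times, estimate tempo from the median inter-beat interval and quantify stability as the intervals' coefficient of variation. All names and formulas below are illustrative assumptions, not the published BEATS statistics:

```python
import numpy as np

def tempo_stability(beat_times):
    """Summarise a detected beat sequence: tempo (BPM) from the median
    inter-beat interval (IBI), and a stability index as the IBIs'
    coefficient of variation (0 means a perfectly steady pulse)."""
    ibis = np.diff(np.asarray(beat_times, dtype=float))
    tempo_bpm = 60.0 / np.median(ibis)
    cv = np.std(ibis) / np.mean(ibis)
    return tempo_bpm, cv

# a steady 120 BPM pulse versus the same pulse with Gaussian timing jitter
steady = np.arange(0.0, 30.0, 0.5)
jittered = steady + np.random.default_rng(0).normal(0.0, 0.05, steady.size)

tempo, cv = tempo_stability(steady)
tempo_j, cv_j = tempo_stability(jittered)
print(cv < 1e-9 < cv_j)  # only the jittered track is flagged as unstable
```

A playlist tool of the kind described above could then keep only tracks whose stability index falls below a chosen threshold within a target tempo range.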
PMCID: PMC4254286  PMID: 25469636
23.  Syncopation, Body-Movement and Pleasure in Groove Music 
PLoS ONE  2014;9(4):e94446.
Moving to music is an essential human pleasure particularly related to musical groove. Structurally, music associated with groove is often characterised by rhythmic complexity in the form of syncopation, frequently observed in musical styles such as funk, hip-hop and electronic dance music. Structural complexity has been related to positive affect in music more broadly, but the function of syncopation in eliciting pleasure and body-movement in groove is unknown. Here we report results from a web-based survey which investigated the relationship between syncopation and ratings of wanting to move and experienced pleasure. Participants heard funk drum-breaks with varying degrees of syncopation and audio entropy, and rated the extent to which the drum-breaks made them want to move and how much pleasure they experienced. While entropy was found to be a poor predictor of wanting to move and pleasure, the results showed that medium degrees of syncopation elicited the most desire to move and the most pleasure, particularly for participants who enjoy dancing to music. Hence, there is an inverted U-shaped relationship between syncopation, body-movement and pleasure, and syncopation seems to be an important structural factor in embodied and affective responses to groove.
PMCID: PMC3989225  PMID: 24740381
24.  Precursors of Dancing and Singing to Music in Three- to Four-Months-Old Infants 
PLoS ONE  2014;9(5):e97680.
Dancing and singing to music involve auditory-motor coordination and have been essential to our human culture since ancient times. Although scholars have been trying to understand the evolutionary and developmental origin of music, early human developmental manifestations of auditory-motor interactions in music have not been fully investigated. Here we report limb movements and vocalizations in three- to four-months-old infants while they listened to music and were in silence. In the group analysis, we found no significant increase in the amount of movement or in the relative power spectrum density around the musical tempo in the music condition compared to the silent condition. Intriguingly, however, there were two infants who demonstrated striking increases in the rhythmic movements via kicking or arm-waving around the musical tempo during listening to music. Monte-Carlo statistics with phase-randomized surrogate data revealed that the limb movements of these individuals were significantly synchronized to the musical beat. Moreover, we found a clear increase in the formant variability of vocalizations in the group during music perception. These results suggest that infants at this age are already primed with their bodies to interact with music via limb movements and vocalizations.
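The phase-randomized surrogate test named above can be sketched as follows. This is a generic illustration, not the authors' exact pipeline; the toy "movement" signal and the synchrony statistic are assumptions. Surrogates preserve a signal's magnitude spectrum while scrambling its Fourier phases, so comparing an observed synchrony statistic against the surrogate distribution yields a Monte-Carlo p-value:

```python
import numpy as np

def phase_randomized_surrogate(x, rng):
    """Surrogate with the same magnitude spectrum as x but uniformly random
    Fourier phases, destroying any temporal alignment with other signals."""
    spec = np.fft.rfft(x)
    phases = rng.uniform(0.0, 2.0 * np.pi, spec.size)
    phases[0] = 0.0       # keep the DC component real
    if x.size % 2 == 0:
        phases[-1] = 0.0  # keep the Nyquist bin real for even lengths
    return np.fft.irfft(np.abs(spec) * np.exp(1j * phases), n=x.size)

def sync_stat(a, b):
    # crude in-phase synchrony measure between two zero-mean signals
    return abs(np.mean(a * b))

rng = np.random.default_rng(1)
fs, beat_hz = 50, 2.0
t = np.arange(0.0, 20.0, 1.0 / fs)
beat = np.sin(2 * np.pi * beat_hz * t)      # the musical beat
movement = np.sin(2 * np.pi * beat_hz * t)  # limb movement locked to it

observed = sync_stat(movement, beat)
null = [sync_stat(phase_randomized_surrogate(movement, rng), beat)
        for _ in range(200)]
p = (1 + sum(n >= observed for n in null)) / (1 + len(null))
print(p < 0.05)  # the observed phase-locking is unlikely under the null
```

Because each surrogate keeps the movement signal's power spectrum, a significant result reflects genuine phase alignment with the beat rather than mere energy at the beat frequency.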
PMCID: PMC4023986  PMID: 24837135
25.  A Functional MRI Study of Happy and Sad Emotions in Music with and without Lyrics 
Musical emotions, such as happiness and sadness, have been investigated using instrumental music devoid of linguistic content. However, pop and rock, the most common musical genres, utilize lyrics for conveying emotions. Using participants’ self-selected musical excerpts, we studied their behavior and brain responses to elucidate how lyrics interact with musical emotion processing, as reflected by emotion recognition and activation of limbic areas involved in affective experience. We extracted samples from subjects’ selections of sad and happy pieces and sorted them according to the presence of lyrics. Acoustic feature analysis showed that music with lyrics differed from music without lyrics in spectral centroid, a feature related to perceptual brightness, whereas sad music with lyrics did not diverge from happy music without lyrics, indicating the role of other factors in emotion classification. Behavioral ratings revealed that happy music without lyrics induced stronger positive emotions than happy music with lyrics. We also acquired functional magnetic resonance imaging data while subjects performed affective tasks regarding the music. First, using ecological and acoustically variable stimuli, we broadened previous findings about the brain processing of musical emotions and of songs versus instrumental music. Additionally, contrasts between sad music with versus without lyrics recruited the parahippocampal gyrus, the amygdala, the claustrum, the putamen, the precentral gyrus, the medial and inferior frontal gyri (including Broca’s area), and the auditory cortex, while the reverse contrast produced no activations. Happy music without lyrics activated structures of the limbic system and the right pars opercularis of the inferior frontal gyrus, whereas auditory regions alone responded to happy music with lyrics. These findings point to the role of acoustic cues for the experience of happiness in music and to the importance of lyrics for sad musical emotions.
PMCID: PMC3227856  PMID: 22144968
music; emotion; fMRI; limbic system; language; acoustic feature
