Results 1-25 (774106)

1.  Syncopation and the Score 
PLoS ONE  2013;8(9):e74692.
The score is a symbolic encoding that describes a piece of music, written according to the conventions of music theory, which must be rendered as sound (e.g., by a performer) before it may be perceived as music by the listener. In this paper we provide a step towards unifying music theory with music perception in terms of the relationship between notated rhythm (i.e., the score) and perceived syncopation. In our experiments we evaluated this relationship by manipulating the score, rendering it as sound and eliciting subjective judgments of syncopation. We used a metronome to provide explicit cues to the prevailing rhythmic structure (as defined in the time signature). Three-bar scores with time signatures of 4/4 and 6/8 were constructed using repeated one-bar rhythm-patterns, with each pattern built from basic half-bar rhythm-components. Our manipulations gave rise to various rhythmic structures, including polyrhythms and rhythms with missing strong- and/or down-beats. Listeners (N = 10) were asked to rate the degree of syncopation they perceived in response to a rendering of each score. We observed higher degrees of syncopation in time signatures of 6/8, for polyrhythms, and for rhythms featuring a missing down-beat. We also found that the location of a rhythm-component within the bar has a significant effect on perceived syncopation. Our findings provide new insight into models of syncopation and point the way towards areas in which the models may be improved.
doi:10.1371/journal.pone.0074692
PMCID: PMC3769263  PMID: 24040323
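The abstract above refers to formal models of syncopation without spelling one out. As a hedged illustration, the sketch below implements a simplified scoring rule in the spirit of the Longuet-Higgins and Lee metrical-weight approach, assuming a binary onset grid with integer metrical weights; it is not the authors' own model.

```python
# Minimal syncopation score in the spirit of Longuet-Higgins & Lee (1984).
# Assumptions (not taken from the abstract): a bar is a binary onset grid,
# and each grid position carries a metrical weight (0 = downbeat, more
# negative = metrically weaker).

def metrical_weights(levels):
    """Weight of each position in a grid of 2**levels slots: 0 for the
    downbeat, -k for positions first reached at subdivision level k."""
    n = 2 ** levels
    weights = []
    for i in range(n):
        level = 0
        step = n
        while step > 1 and i % step != 0:
            step //= 2
            level += 1
        weights.append(-level)
    return weights

def syncopation_score(onsets, levels=3):
    """Sum of (weight of silent position - weight of preceding onset) over
    note/rest pairs where the silent position is metrically stronger."""
    weights = metrical_weights(levels)
    n = len(weights)
    assert len(onsets) == n
    score = 0
    for i, has_onset in enumerate(onsets):
        if not has_onset:
            continue
        # scan forward (cyclically) through the silent positions that follow
        # this onset; a stronger silent position marks a syncopation
        j = (i + 1) % n
        while j != i and onsets[j] == 0:
            if weights[j] > weights[i]:
                score += weights[j] - weights[i]
                break
            j = (j + 1) % n
    return score

# Example: an eighth-note grid in 4/4 (8 positions).
straight = [1, 0, 1, 0, 1, 0, 1, 0]     # onsets on the beats
syncopated = [1, 0, 0, 1, 0, 0, 1, 0]   # onset anticipating beat 3
print(syncopation_score(straight), syncopation_score(syncopated))
```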
2.  Syncopation, Body-Movement and Pleasure in Groove Music 
PLoS ONE  2014;9(4):e94446.
Moving to music is an essential human pleasure particularly related to musical groove. Structurally, music associated with groove is often characterised by rhythmic complexity in the form of syncopation, frequently observed in musical styles such as funk, hip-hop and electronic dance music. Structural complexity has been related to positive affect in music more broadly, but the function of syncopation in eliciting pleasure and body-movement in groove is unknown. Here we report results from a web-based survey which investigated the relationship between syncopation and ratings of wanting to move and experienced pleasure. Participants heard funk drum-breaks with varying degrees of syncopation and audio entropy, and rated the extent to which the drum-breaks made them want to move and how much pleasure they experienced. While entropy was found to be a poor predictor of wanting to move and pleasure, the results showed that medium degrees of syncopation elicited the most desire to move and the most pleasure, particularly for participants who enjoy dancing to music. Hence, there is an inverted U-shaped relationship between syncopation, body-movement and pleasure, and syncopation seems to be an important structural factor in embodied and affective responses to groove.
doi:10.1371/journal.pone.0094446
PMCID: PMC3989225  PMID: 24740381
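The inverted U-shaped relationship reported above can be summarized by a quadratic fit of mean ratings against syncopation degree. The sketch below uses invented placeholder ratings purely to illustrate the check; the study's actual data and analysis are not reproduced here.

```python
# Hedged illustration: testing for an inverted-U shape by fitting a quadratic
# to mean "wanting to move" ratings as a function of syncopation degree.
# The numbers below are invented placeholders, not data from the study.
import numpy as np

syncopation = np.array([0, 1, 2, 3, 4, 5], dtype=float)   # low -> high
move_rating = np.array([3.1, 4.2, 5.0, 5.3, 4.6, 3.4])    # placeholder means

# Fit rating = a*s**2 + b*s + c; a negative quadratic coefficient with the
# peak inside the observed range is consistent with an inverted U.
a, b, c = np.polyfit(syncopation, move_rating, deg=2)
peak = -b / (2 * a)
print(f"quadratic coefficient a = {a:.3f}, estimated peak at syncopation {peak:.2f}")
```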
3.  Rhythmic complexity and predictive coding: a novel approach to modeling rhythm and meter perception in music 
Frontiers in Psychology  2014;5:1111.
Musical rhythm, consisting of apparently abstract intervals of accented temporal events, has a remarkable capacity to move our minds and bodies. How does the cognitive system enable our experiences of rhythmically complex music? In this paper, we describe some common forms of rhythmic complexity in music and propose the theory of predictive coding (PC) as a framework for understanding how rhythm and rhythmic complexity are processed in the brain. We also consider why we feel so compelled by rhythmic tension in music. First, we consider theories of rhythm and meter perception, which provide hierarchical and computational approaches to modeling. Second, we present the theory of PC, which posits a hierarchical organization of brain responses reflecting fundamental, survival-related mechanisms associated with predicting future events. According to this theory, perception and learning are manifested through the brain’s Bayesian minimization of the error between the input to the brain and the brain’s prior expectations. Third, we develop a PC model of musical rhythm, in which rhythm perception is conceptualized as an interaction between what is heard (“rhythm”) and the brain’s anticipatory structuring of music (“meter”). Finally, we review empirical studies of the neural and behavioral effects of syncopation, polyrhythm and groove, and propose how these studies can be seen as special cases of the PC theory. We argue that musical rhythm exploits the brain’s general principles of prediction and propose that pleasure and desire for sensorimotor synchronization from musical rhythm may be a result of such mechanisms.
doi:10.3389/fpsyg.2014.01111
PMCID: PMC4181238  PMID: 25324813
rhythm; meter; rhythmic complexity; predictive coding; pleasure
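As a hedged summary of the predictive-coding framing described above, meter perception can be written as Bayesian inference over metrical interpretations, with the residual prediction error corresponding to experienced rhythmic tension; this is a generic textbook formulation, not the specific model developed in the paper.

```latex
% A generic predictive-coding reading of meter inference (not the paper's own model):
% the inferred meter m* best reconciles the heard rhythm r with prior metrical
% expectations, and the residual prediction error is experienced as tension.
\[
  P(m \mid r) \propto P(r \mid m)\, P(m), \qquad
  m^{*} = \arg\max_{m} P(m \mid r), \qquad
  \varepsilon = r - \hat{r}(m^{*}).
\]
```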
4.  The Role of Emotion in Musical Improvisation: An Analysis of Structural Features 
PLoS ONE  2014;9(8):e105144.
One of the primary functions of music is to convey emotion, yet how music accomplishes this task remains unclear. For example, simple correlations between mode (major vs. minor) and emotion (happy vs. sad) do not adequately explain the enormous range, subtlety or complexity of musically induced emotions. In this study, we examined the structural features of unconstrained musical improvisations generated by jazz pianists in response to emotional cues. We hypothesized that musicians would not utilize any universal rules to convey emotions, but would instead combine heterogeneous musical elements together in order to depict positive and negative emotions. Our findings demonstrate a lack of simple correspondence between emotions and musical features of spontaneous musical improvisation. While improvisations in response to positive emotional cues were more likely to be in major keys, have faster tempos, faster key press velocities and more staccato notes when compared to negative improvisations, there was a wide distribution for each emotion with components that directly violated these primary associations. The finding that musicians often combine disparate features together in order to convey emotion during improvisation suggests that structural diversity may be an essential feature of the ability of music to express a wide range of emotion.
doi:10.1371/journal.pone.0105144
PMCID: PMC4140734  PMID: 25144200
5.  Mapping Aesthetic Musical Emotions in the Brain 
Cerebral Cortex (New York, NY)  2011;22(12):2769-2783.
Music evokes complex emotions beyond pleasant/unpleasant or happy/sad dichotomies usually investigated in neuroscience. Here, we used functional neuroimaging with parametric analyses based on the intensity of felt emotions to explore a wider spectrum of affective responses reported during music listening. Positive emotions correlated with activation of left striatum and insula when high-arousing (Wonder, Joy) but right striatum and orbitofrontal cortex when low-arousing (Nostalgia, Tenderness). Irrespective of their positive/negative valence, high-arousal emotions (Tension, Power, and Joy) also correlated with activations in sensory and motor areas, whereas low-arousal categories (Peacefulness, Nostalgia, and Sadness) selectively engaged ventromedial prefrontal cortex and hippocampus. The right parahippocampal cortex activated in all but positive high-arousal conditions. Results also suggested some blends between activation patterns associated with different classes of emotions, particularly for feelings of Wonder or Transcendence. These data reveal a differentiated recruitment across emotions of networks involved in reward, memory, self-reflective, and sensorimotor processes, which may account for the unique richness of musical emotions.
doi:10.1093/cercor/bhr353
PMCID: PMC3491764  PMID: 22178712
emotion; fMRI; music; striatum; ventro-medial prefrontal cortex
6.  What musicians do to induce the sensation of groove in simple and complex melodies, and how listeners perceive it 
Groove is the experience of wanting to move when hearing music, for example by snapping the fingers or tapping the feet. This is a central aspect of much music, in particular of music intended for dancing. While previous research has found considerable consistency in ratings of groove across individuals, it remains unclear how groove is induced, that is, which physical properties of the acoustic signal differ between more and less groove-inducing versions. Here, we examined this issue with a performance experiment, in which four musicians performed six simple and six complex melodies in two conditions with the intention of minimizing and maximizing groove. Analyses of rhythmical and temporal properties from the performances demonstrated some general effects. For example, more groove was associated with more notes on faster metrical levels and syncopation, and less groove was associated with deadpan timing and destruction of the regular pulse. We did not observe that deviations from the metrical grid [i.e., micro-timing (MT)] were a predictor of groove. A listener experiment confirmed that the musicians' manipulations had the intended effects on the experience of groove. A Brunswikian lens model was applied, which estimates the performer-perceiver communication across the two experiments. It showed that the communication achievement for simple melodies was 0.62, and that the matching of performers' and listeners' use of nine rhythmical cues was 0.83. For complex melodies with an already high level of groove, the corresponding values were 0.39 and 0.34, showing that it was much more difficult to “take out” groove from musical structures designed to induce groove.
doi:10.3389/fpsyg.2014.00894
PMCID: PMC4137755  PMID: 25191286
groove; music; musicians; movement; rhythm; syncopation; micro-timing
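A Brunswikian lens-model analysis of this kind boils down to two correlations: achievement (performer intention vs. listener judgment) and cue matching (agreement between the two sides' regressions on the cues). The sketch below illustrates the computation on synthetic placeholder data; the cue set, sample size, and values are assumptions, not the paper's.

```python
# Hedged sketch of a Brunswikian lens-model analysis (assumed layout, not the
# paper's actual data): each row is one performance, columns are rhythmic cues
# (e.g., event density, syncopation, micro-timing), "intended" is the
# performer-side groove level and "rated" is the mean listener rating.
import numpy as np

rng = np.random.default_rng(0)
n_perf, n_cues = 48, 9
cues = rng.normal(size=(n_perf, n_cues))                  # placeholder cue values
intended = cues @ rng.normal(size=n_cues) + rng.normal(scale=0.5, size=n_perf)
rated = cues @ rng.normal(size=n_cues) + rng.normal(scale=0.5, size=n_perf)

def fitted(values, X):
    """Least-squares prediction of `values` from the cue matrix X."""
    X1 = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(X1, values, rcond=None)
    return X1 @ beta

achievement = np.corrcoef(intended, rated)[0, 1]                     # r_a
G = np.corrcoef(fitted(intended, cues), fitted(rated, cues))[0, 1]   # cue matching
print(f"achievement = {achievement:.2f}, cue matching G = {G:.2f}")
```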
7.  Dynamic Emotional and Neural Responses to Music Depend on Performance Expression and Listener Experience 
PLoS ONE  2010;5(12):e13812.
Apart from its natural relevance to cognition, music provides a window into the intimate relationships between production, perception, experience, and emotion. Here, emotional responses and neural activity were observed as they evolved together with stimulus parameters over several minutes. Participants listened to a skilled music performance that included the natural fluctuations in timing and sound intensity that musicians use to evoke emotional responses. A mechanical performance of the same piece served as a control. Before and after fMRI scanning, participants reported real-time emotional responses on a 2-dimensional rating scale (arousal and valence) as they listened to each performance. During fMRI scanning, participants listened without reporting emotional responses. Limbic and paralimbic brain areas responded to the expressive dynamics of human music performance, and both emotion and reward related activations during music listening were dependent upon musical training. Moreover, dynamic changes in timing predicted ratings of emotional arousal, as well as real-time changes in neural activity. BOLD signal changes correlated with expressive timing fluctuations in cortical and subcortical motor areas consistent with pulse perception, and in a network consistent with the human mirror neuron system. These findings show that expressive music performance evokes emotion and reward related neural activations, and that music's affective impact on the brains of listeners is altered by musical training. Our observations are consistent with the idea that music performance evokes an emotional response through a form of empathy that is based, at least in part, on the perception of movement and on violations of pulse-based temporal expectancies.
doi:10.1371/journal.pone.0013812
PMCID: PMC3002933  PMID: 21179549
8.  Sad music induces pleasant emotion 
In general, sad music is thought to cause us to experience sadness, which is considered an unpleasant emotion. As a result, the question arises as to why we listen to sad music if it evokes sadness. One possible answer to this question is that we may actually feel positive emotions when we listen to sad music. This suggestion may appear to be counterintuitive; however, in this study, by dividing musical emotion into perceived emotion and felt emotion, we investigated this potential emotional response to music. We hypothesized that felt and perceived emotion may not actually coincide in this respect: sad music would be perceived as sad, but the experience of listening to sad music would evoke positive emotions. A total of 44 participants listened to musical excerpts and provided data on perceived and felt emotions by rating 62 descriptive words or phrases related to emotions on a scale that ranged from 0 (not at all) to 4 (very much). The results revealed that the sad music was perceived to be more tragic, whereas the actual experiences of the participants listening to the sad music induced them to feel more romantic, more blithe, and less tragic emotions than they actually perceived with respect to the same music. Thus, the participants experienced ambivalent emotions when they listened to the sad music. After considering the possible reasons that listeners were induced to experience emotional ambivalence by the sad music, we concluded that the formulation of a new model would be essential for examining the emotions induced by music and that this new model must entertain the possibility that what we experience when listening to music is vicarious emotion.
doi:10.3389/fpsyg.2013.00311
PMCID: PMC3682130  PMID: 23785342
sad music; vicarious emotion; perceived/felt emotion; ambivalent emotion; pleasant emotion
9.  Superior Analgesic Effect of an Active Distraction versus Pleasant Unfamiliar Sounds and Music: The Influence of Emotion and Cognitive Style 
PLoS ONE  2012;7(1):e29397.
Listening to music has been found to reduce acute and chronic pain. The underlying mechanisms are poorly understood; however, emotion and cognitive mechanisms have been suggested to influence the analgesic effect of music. In this study we investigated the influence of familiarity, emotional and cognitive features, and cognitive style on music-induced analgesia. Forty-eight healthy participants were divided into three groups (empathizers, systemizers and balanced) and received acute pain induced by heat while listening to different sounds. Participants listened to unfamiliar Mozart music rated with high valence and low arousal, unfamiliar environmental sounds with valence and arousal similar to the music, an active distraction task (mental arithmetic) and a control, and rated the pain. Data showed that the active distraction led to significantly less pain than did the music or sounds. Both unfamiliar music and sounds reduced pain significantly when compared to the control condition; however, music was no more effective than sound in reducing pain. Furthermore, we found correlations between pain and emotion ratings. Finally, systemizers reported less pain during the mental arithmetic compared with the other two groups. These findings suggest that familiarity may be key to the cognitive and emotional mechanisms of music-induced analgesia, and that cognitive styles may influence pain perception.
doi:10.1371/journal.pone.0029397
PMCID: PMC3252324  PMID: 22242169
10.  The “Musical Emotional Bursts”: a validated set of musical affect bursts to investigate auditory affective processing 
The Musical Emotional Bursts (MEB) consist of 80 brief musical executions expressing basic emotional states (happiness, sadness and fear) and neutrality. These musical bursts were designed to be the musical analog of the Montreal Affective Voices (MAV)—a set of brief non-verbal affective vocalizations portraying different basic emotions. The MEB consist of short (mean duration: 1.6 s) improvisations on a given emotion or of imitations of a given MAV stimulus, played on a violin (10 stimuli × 4 [3 emotions + neutral]), or a clarinet (10 stimuli × 4 [3 emotions + neutral]). The MEB arguably represent a primitive form of music emotional expression, just like the MAV represent a primitive form of vocal, non-linguistic emotional expression. To create the MEB, stimuli were recorded from 10 violinists and 10 clarinetists, and then evaluated by 60 participants. Participants evaluated 240 stimuli [30 stimuli × 4 (3 emotions + neutral) × 2 instruments] by performing either a forced-choice emotion categorization task, a valence rating task or an arousal rating task (20 subjects per task); 40 MAVs were also used in the same session with similar task instructions. Recognition accuracy of emotional categories expressed by the MEB (n:80) was lower than for the MAVs but still very high with an average percent correct recognition score of 80.4%. Highest recognition accuracies were obtained for happy clarinet (92.0%) and fearful or sad violin (88.0% each) MEB stimuli. The MEB can be used to compare the cerebral processing of emotional expressions in music and vocal communication, or used for testing affective perception in patients with communication problems.
doi:10.3389/fpsyg.2013.00509
PMCID: PMC3741467  PMID: 23964255
music; emotion; auditory stimuli; voices
11.  Fusion of electroencephalographic dynamics and musical contents for estimating emotional responses in music listening 
Electroencephalography (EEG)-based emotion classification during music listening has gained increasing attention due to its promise of potential applications such as musical affective brain-computer interfaces (ABCI), neuromarketing, music therapy, and implicit multimedia tagging and triggering. However, music is an ecologically valid and complex stimulus that conveys certain emotions to listeners through compositions of musical elements, and using EEG signals alone to distinguish emotions remains challenging. This study aimed to assess the applicability of a multimodal approach that leverages EEG dynamics and the acoustic characteristics of musical content for the classification of emotional valence and arousal. To this end, this study adopted machine-learning methods to systematically elucidate the roles of the EEG and music modalities in emotion modeling. The empirical results suggested that when whole-head EEG signals were available, the inclusion of musical content did not improve the classification performance: the performance of 74~76% obtained using the EEG modality alone was statistically comparable to that of the multimodal approach. However, if EEG dynamics were only available from a small set of electrodes (likely the case in real-life applications), the music modality played a complementary role and augmented the EEG results from around 61–67% in valence classification and from around 58–67% in arousal classification. Musical timbre appeared to replace less-discriminative EEG features and led to improvements in both valence and arousal classification, whereas musical loudness contributed specifically to the arousal classification. The present study not only provides principles for constructing an EEG-based multimodal approach, but also reveals fundamental insights into the interplay of brain activity and musical content in emotion modeling.
doi:10.3389/fnins.2014.00094
PMCID: PMC4013455  PMID: 24822035
EEG; emotion classification; affective brain-computer interface; music signal processing; music listening
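A minimal sketch of the feature-level fusion idea described above, assuming pre-extracted EEG and music feature matrices and using scikit-learn; the feature definitions, labels, and classifier are placeholders rather than the study's actual pipeline.

```python
# Hedged sketch of feature-level fusion of EEG and music features for binary
# valence classification, using scikit-learn. The feature sets, labels, and
# classifier choice are placeholders, not the study's actual pipeline.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_trials = 200
eeg_feats = rng.normal(size=(n_trials, 32))    # e.g., band power per electrode
music_feats = rng.normal(size=(n_trials, 12))  # e.g., timbre, loudness descriptors
valence = rng.integers(0, 2, size=n_trials)    # placeholder labels

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))

acc_eeg = cross_val_score(clf, eeg_feats, valence, cv=5).mean()
acc_fused = cross_val_score(clf, np.hstack([eeg_feats, music_feats]), valence, cv=5).mean()
print(f"EEG only: {acc_eeg:.2f}   EEG + music: {acc_fused:.2f}")
```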
12.  Are age-related differences between young and older adults in an affective working memory test sensitive to the music effects? 
There is evidence showing that music can affect cognitive performance by improving our emotional state. The aim of the current study was to analyze whether age-related differences between young and older adults in a Working Memory (WM) span test, in which the stimuli to be recalled have a different valence (i.e., neutral, positive, or negative words), are sensitive to exposure to music. Because some previous studies showed that emotional words can sustain older adults’ performance in WM, we examined whether listening to music could enhance the benefit of emotional material, relative to neutral words, on WM performance, decreasing the age-related difference between younger and older adults. In particular, we analyzed the effect of two types of music (Mozart vs. Albinoni), which differ in tempo, arousal and mood induction, on age-related differences in an affective version of the Operation WM Span task. Results showed no effect of music on the WM test regardless of the emotional content of the music (Mozart vs. Albinoni). However, a valence effect for the words in the WM task was found, with more negative words recalled than positive and neutral ones in both younger and older adults. When individual differences in accuracy in the processing phase of the Operation Span task were considered, only younger low-performing participants were affected by the type of music, with the Albinoni condition lowering their performance relative to the Mozart condition. This result suggests that individual differences in WM performance, at least in young adults, could be affected by the type of music. Altogether, these findings suggest that complex span tasks, such as WM tasks, along with age-related differences, are not sensitive to music effects.
doi:10.3389/fnagi.2014.00298
PMCID: PMC4227510  PMID: 25426064
music; working memory; emotions; aging; individual differences
13.  Pleasurable music affects reinforcement learning according to the listener 
Mounting evidence links the enjoyment of music to brain areas implicated in emotion and the dopaminergic reward system. In particular, dopamine release in the ventral striatum seems to play a major role in the rewarding aspect of music listening. Striatal dopamine also influences reinforcement learning, such that subjects with greater dopamine efficacy learn better to approach rewards while those with lesser dopamine efficacy learn better to avoid punishments. In this study, we explored the practical implications of musical pleasure through its ability to facilitate reinforcement learning via non-pharmacological dopamine elicitation. Subjects from a wide variety of musical backgrounds chose a pleasurable and a neutral piece of music from an experimenter-compiled database, and then listened to one or both of these pieces (according to pseudo-random group assignment) as they performed a reinforcement learning task dependent on dopamine transmission. We assessed musical backgrounds as well as typical listening patterns with the new Helsinki Inventory of Music and Affective Behaviors (HIMAB), and separately investigated behavior for the training and test phases of the learning task. Subjects with more musical experience trained better with neutral music and tested better with pleasurable music, while those with less musical experience exhibited the opposite effect. HIMAB results regarding listening behaviors and subjective music ratings indicate that these effects arose from different listening styles: namely, more affective listening in non-musicians and more analytical listening in musicians. In conclusion, musical pleasure was able to influence task performance, and the shape of this effect depended on group and individual factors. These findings have implications in affective neuroscience, neuroaesthetics, learning, and music therapy.
doi:10.3389/fpsyg.2013.00541
PMCID: PMC3748532  PMID: 23970875
music; pleasure; reinforcement learning; reward; dopamine; subjectivity; musical experience; listening strategy
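Reinforcement-learning tasks of the kind mentioned above are commonly modeled with a delta-rule learner and softmax choice. The sketch below is that generic model on placeholder contingencies, not the specific task or parameters used in the study.

```python
# Hedged sketch of a delta-rule (Rescorla-Wagner) learner with softmax choice,
# a standard way such probabilistic reward-learning tasks are modeled; this is
# a generic illustration, not the study's actual task or model.
import numpy as np

rng = np.random.default_rng(0)
reward_prob = {"A": 0.8, "B": 0.2}        # placeholder stimulus-reward contingencies
alpha, beta = 0.1, 3.0                    # learning rate, inverse temperature
Q = {"A": 0.5, "B": 0.5}

for trial in range(200):
    # softmax choice between the two options
    p_A = 1.0 / (1.0 + np.exp(-beta * (Q["A"] - Q["B"])))
    choice = "A" if rng.random() < p_A else "B"
    reward = float(rng.random() < reward_prob[choice])
    # prediction-error update (the dopamine-dependent step in such models)
    Q[choice] += alpha * (reward - Q[choice])

print({k: round(v, 2) for k, v in Q.items()})
```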
14.  Syncopation creates the sensation of groove in synthesized music examples 
Frontiers in Psychology  2014;5:1036.
In order to better understand the musical properties which elicit an increased sensation of wanting to move when listening to music—groove—we investigate the effect of adding syncopation to simple piano melodies, under the hypothesis that syncopation is correlated to groove. Across two experiments we examine listeners' experience of groove to synthesized musical stimuli covering a range of syncopation levels and densities of musical events, according to formal rules implemented by a computer algorithm that shifts musical events from strong to weak metrical positions. Results indicate that moderate levels of syncopation lead to significantly higher groove ratings than melodies without any syncopation or with maximum possible syncopation. A comparison between the various transformations and the way they were rated shows that there is no simple relation between syncopation magnitude and groove.
doi:10.3389/fpsyg.2014.01036
PMCID: PMC4165312  PMID: 25278923
groove; syncopation; movement; rhythm; listening experiment; music
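A hedged sketch of the kind of transformation the abstract describes: onsets are moved from strong metrical positions to the immediately preceding weaker position, which anticipates the beat and introduces syncopation. The grid, weights, and shifting rule are illustrative assumptions, not the paper's exact algorithm.

```python
# Hedged sketch of a syncopation-inducing transformation: shift onsets from
# strong metrical positions to the preceding weaker, empty position.

METRICAL_WEIGHTS = [0, -3, -2, -3, -1, -3, -2, -3]   # eighth-note grid in 4/4

def shift_strong_onsets(onsets, max_shifts=1):
    """Move up to `max_shifts` onsets from a strong position to the preceding
    weaker, empty position (anticipation), returning a new pattern."""
    pattern = list(onsets)
    shifts = 0
    # visit positions from strongest to weakest
    for i in sorted(range(len(pattern)), key=lambda k: METRICAL_WEIGHTS[k], reverse=True):
        prev = (i - 1) % len(pattern)
        if (shifts < max_shifts and pattern[i] == 1 and pattern[prev] == 0
                and METRICAL_WEIGHTS[prev] < METRICAL_WEIGHTS[i]):
            pattern[i], pattern[prev] = 0, 1
            shifts += 1
    return pattern

original = [1, 0, 1, 0, 1, 0, 1, 0]           # onsets on the beats
print(shift_strong_onsets(original))           # one anticipated beat
print(shift_strong_onsets(original, max_shifts=3))
```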
15.  Music Perception and Appraisal: Cochlear Implant Users and Simulated CI Listening 
Background
The inability to hear music well may contribute to decreased quality of life for cochlear implant (CI) users. Researchers have recently reported on the generally poor ability of CI users to perceive music, and a few researchers have reported on the enjoyment of music by CI users. However, the relation between music perception skills and music enjoyment is much less explored. Only one study has attempted to predict CI users’ enjoyment and perception of music from the users’ demographic variables and other perceptual skills (Gfeller et al., 2008). Gfeller’s results yielded different predictive relationships for music perception and music enjoyment, and the relationships were weak, at best.
Purpose
The first goal of this study is to clarify the nature and relationship between music perception skills and musical enjoyment for CI users, by employing a battery of music tests. The second goal is to determine whether normal hearing (NH) subjects, listening with a CI-simulation, can be used as a model to represent actual CI users for either music enjoyment ratings or music perception tasks.
Research Design
A prospective, cross-sectional observational study. Original music stimuli (unprocessed) were presented to CI users, and music stimuli processed with CI-simulation software were presented to twenty NH listeners (CIsim). As a control, original music stimuli were also presented to five other NH listeners. All listeners appraised twenty-four musical excerpts, performed music perception tests, and filled out a musical background questionnaire. Music perception tests were the Appreciation of Music in Cochlear Implantees (AMICI), Montreal Battery for Evaluation of Amusia (MBEA), Melodic Contour Identification (MCI), and University of Washington Clinical Assessment of Music Perception (UW-CAMP).
Study Sample
Twenty-five NH adults (22 – 56 years old), recruited from the local and research communities, participated in the study. Ten adult CI users (46 – 80 years old), recruited from the patient population of the local adult cochlear implant program, also participated in this study.
Data Collection and Analysis
Musical excerpts were appraised using a 7-point rating scale and music perception tests were scored as designed. Analysis of variance was performed on appraisal ratings, perception scores, and questionnaire data with listener group as a factor. Correlations were computed between musical appraisal ratings and perceptual scores on each music test.
Results
Music is rated as more enjoyable by CI users than by the NH listeners hearing music through a simulation (CIsim), and the difference is statistically significant. For roughly half of the music perception tests, there are no statistically significant differences between the performance of the CI users and of the CIsim listeners. Generally, correlations between appraisal ratings and music perception scores are weak or non-existent.
Conclusions
NH adults listening to music that has been processed through a CI-simulation program are a reasonable model for actual CI users for many music perception skills, but not for rating musical enjoyment. For CI users, the apparent independence of music perception skills and music enjoyment (as assessed by appraisals), indicates that music enjoyment should not be assumed and should be examined explicitly.
doi:10.3766/jaaa.23.5.6
PMCID: PMC3400338  PMID: 22533978
music; cochlear implant; cochlear implant simulation; timbre; melody; appraisal
16.  It's not what you play, it's how you play it: Timbre affects perception of emotion in music 
Salient sensory experiences often have a strong emotional tone, but the neuropsychological relations between perceptual characteristics of sensory objects and the affective information they convey remain poorly defined. Here we addressed the relationship between sound identity and emotional information using music. In two experiments, we investigated whether perception of emotions is influenced by altering the musical instrument on which the music is played, independently of other musical features. In the first experiment, 40 novel melodies each representing one of four emotions (happiness, sadness, fear, or anger) were each recorded on four different instruments (an electronic synthesizer, a piano, a violin, and a trumpet), controlling for melody, tempo, and loudness between instruments. Healthy participants (23 young adults aged 18–30 years, 24 older adults aged 58–75 years) were asked to select which emotion they thought each musical stimulus represented in a four-alternative forced-choice task. Using a generalized linear mixed model we found a significant interaction between instrument and emotion judgement with a similar pattern in young and older adults (p < .0001 for each age group). The effect was not attributable to musical expertise. In the second experiment using the same melodies and experimental design, the interaction between timbre and perceived emotion was replicated (p < .05) in another group of young adults for novel synthetic timbres designed to incorporate timbral cues to particular emotions. Our findings show that timbre (instrument identity) independently affects the perception of emotions in music after controlling for other acoustic, cognitive, and performance factors.
doi:10.1080/17470210902765957
PMCID: PMC2683716  PMID: 19391047
Timbre; Emotion; Music; Auditory object
17.  Influence of Climate on Emergency Department Visits for Syncope: Role of Air Temperature Variability 
PLoS ONE  2011;6(7):e22719.
Background
Syncope is a clinical event characterized by a transient loss of consciousness, estimated to affect 6.2/1000 person-years, resulting in remarkable health care and social costs. Human pathophysiology suggests that heat may promote syncope during standing. We tested the hypothesis that the increase of air temperatures from January to July would be accompanied by an increased rate of syncope resulting in a higher frequency of Emergency Department (ED) visits. We also evaluated the role of maximal temperature variability in affecting ED visits for syncope.
Methodology/Principal Findings
We included 770 of 2775 consecutive subjects who were seen for syncope at four EDs between January and July 2004. This period was subdivided into three epochs of similar length: 23 January–31 March, 1 April–31 May and 1 June–31 July. Spectral techniques were used to analyze oscillatory components of day by day maximal temperature and syncope variability and assess their linear relationship.
There was no correlation between daily maximum temperatures and the number of syncope episodes. ED visits for syncope were lower in June and July, when maximal temperature variability declined even though the maximal temperatures themselves were higher. Frequency analysis of day-by-day maximal temperature variability showed a major non-random fluctuation characterized by a ∼23-day period and two minor oscillations with ∼3- and ∼7-day periods. The latter oscillation was correlated with a similar ∼7-day fluctuation in ED visits for syncope.
Conclusions/Significance
We conclude that ED visits for syncope were not predicted by daily maximal temperature but were associated with increased temperature variability. A ∼7-day rhythm characterized the variability of both maximal temperatures and ED visits for syncope, suggesting that climate changes may have a significant effect on the mode of syncope occurrence.
doi:10.1371/journal.pone.0022719
PMCID: PMC3144938  PMID: 21818372
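The spectral analysis described above amounts to computing a periodogram of a daily series and looking for a weekly peak. The sketch below does this on a synthetic count series; it illustrates the technique and does not use the study's data.

```python
# Hedged sketch: periodogram of a daily count series to look for a ~7-day
# periodicity. The series below is synthetic, not the study's ED data.
import numpy as np

rng = np.random.default_rng(1)
days = np.arange(190)                                    # roughly Jan-Jul
counts = 4 + 1.5 * np.sin(2 * np.pi * days / 7) + rng.poisson(1.0, size=days.size)

detrended = counts - counts.mean()
spectrum = np.abs(np.fft.rfft(detrended)) ** 2
freqs = np.fft.rfftfreq(days.size, d=1.0)                # cycles per day

dominant = freqs[1:][np.argmax(spectrum[1:])]            # skip the zero frequency
print(f"dominant period = {1.0 / dominant:.1f} days")
```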
18.  Neurophysiological and Behavioral Responses to Music Therapy in Vegetative and Minimally Conscious States 
Assessment of awareness for those with disorders of consciousness is a challenging undertaking, due to the complex presentation of the population. Debate surrounds whether behavioral assessments provide greater accuracy in diagnosis than neuroimaging methods, and despite developments in both, misdiagnosis rates remain high. Music therapy may be effective in the assessment and rehabilitation of this population due to effects of musical stimuli on arousal, attention, and emotion, irrespective of verbal or motor deficits. However, an evidence base is lacking as to which procedures are most effective. To address this, a neurophysiological and behavioral study was undertaken comparing electroencephalogram (EEG), heart rate variability, respiration, and behavioral responses of 20 healthy subjects with 21 individuals in vegetative or minimally conscious states (VS or MCS). Subjects were presented with live preferred music and improvised music entrained to respiration (procedures typically used in music therapy), recordings of disliked music, white noise, and silence. ANOVA tests indicated a range of significant responses (p ≤ 0.05) across healthy subjects corresponding to arousal and attention in response to preferred music including concurrent increases in respiration rate with globally enhanced EEG power spectra responses (p = 0.05–0.0001) across frequency bandwidths. Whilst physiological responses were heterogeneous across patient cohorts, significant post hoc EEG amplitude increases for stimuli associated with preferred music were found for frontal midline theta in six VS and four MCS subjects, and frontal alpha in three VS and four MCS subjects (p = 0.05–0.0001). Furthermore, behavioral data showed a significantly increased blink rate for preferred music (p = 0.029) within the VS cohort. Two VS cases are presented with concurrent changes (p ≤ 0.05) across measures indicative of discriminatory responses to both music therapy procedures. A third MCS case study is presented highlighting how more sensitive selective attention may distinguish MCS from VS. The findings suggest that further investigation is warranted to explore the use of music therapy for prognostic indicators, and its potential to support neuroplasticity in rehabilitation programs.
doi:10.3389/fnhum.2013.00884
PMCID: PMC3872324  PMID: 24399950
EEG; music therapy; disorders of consciousness; assessment; diagnosis; brain injury; vegetative state; minimally conscious state
19.  Unforgettable film music: The role of emotion in episodic long-term memory for music 
BMC Neuroscience  2008;9:48.
Background
Specific pieces of music can elicit strong emotions in listeners and, possibly in connection with these emotions, can be remembered even years later. However, episodic memory for emotional music compared with less emotional music has not yet been examined. We investigated whether emotional music is remembered better than less emotional music. Also, we examined the influence of musical structure on memory performance.
Results
Recognition of 40 musical excerpts was investigated as a function of arousal, valence, and emotional intensity ratings of the music. In the first session the participants judged valence and arousal of the musical pieces. One week later, participants listened to the 40 old and 40 new musical excerpts randomly interspersed and were asked to make an old/new decision as well as to indicate arousal and valence of the pieces. Musical pieces that were rated as very positive were recognized significantly better.
Conclusion
Musical excerpts rated as very positive are remembered better. Valence seems to be an important modulator of episodic long-term memory for music. Evidently, strong emotions related to the musical experience facilitate memory formation and retrieval.
doi:10.1186/1471-2202-9-48
PMCID: PMC2430709  PMID: 18505596
20.  Musical Aptitude Is Associated with AVPR1A-Haplotypes 
PLoS ONE  2009;4(5):e5534.
Artistic creativity forms the basis of music culture and music industry. Composing, improvising and arranging music are complex creative functions of the human brain, whose biological value remains unknown. We hypothesized that practicing music is a form of social communication that requires musical aptitude and even creativity in music. In order to understand the neurobiological basis of music in human evolution and communication we analyzed polymorphisms of the arginine vasopressin receptor 1A (AVPR1A), serotonin transporter (SLC6A4), catechol-O-methyltransferase (COMT), dopamine receptor D2 (DRD2) and tryptophan hydroxylase 1 (TPH1), genes associated with social bonding and cognitive functions, in 19 Finnish families (n = 343 members) with professional musicians and/or active amateurs. All family members were tested for musical aptitude using the auditory structuring ability test (Karma Music test; KMT) and Carl Seashore's tests for pitch (SP) and for time (ST). Data on creativity in music (composing, improvising and/or arranging music) were surveyed using a web-based questionnaire. Here we show for the first time that creative functions in music have a strong genetic component (h2 = .84; composing h2 = .40; arranging h2 = .46; improvising h2 = .62) in Finnish multigenerational families. We also show that high music test scores are significantly associated with creative functions in music (p<.0001). We discovered an overall haplotype association with the AVPR1A gene (markers RS1 and RS3) and KMT (p = 0.0008; corrected p = 0.00002), SP (p = 0.0261; corrected p = 0.0072) and combined music test scores (COMB) (p = 0.0056; corrected p = 0.0006). The AVPR1A haplotype AVR+RS1 further suggested a positive association with ST (p = 0.0038; corrected p = 0.00184) and COMB (p = 0.0083; corrected p = 0.0040) using the haplotype-based association test (HBAT). The results suggest that the neurobiology of music perception and production is likely to be related to the pathways affecting intrinsic attachment behavior.
doi:10.1371/journal.pone.0005534
PMCID: PMC2678260  PMID: 19461995
21.  Play it again, Sam: brain correlates of emotional music recognition 
Background: Music can elicit strong emotions and can be remembered in connection with these emotions even decades later. Yet, the brain correlates of episodic memory for highly emotional music compared with less emotional music have not been examined. We therefore used fMRI to investigate brain structures activated by emotional processing of short excerpts of film music successfully retrieved from episodic long-term memory.
Methods: Eighteen non-musician volunteers were exposed to 60 structurally similar pieces of film music of 10 s length with high arousal ratings and either less positive or very positive valence ratings. Two similar sets of 30 pieces were created. Each of these was presented to half of the participants during the encoding session outside of the scanner, while all stimuli were used during the second recognition session inside the MRI-scanner. During fMRI, each stimulation period (10 s) was followed by a 20 s resting period during which participants pressed either the “old” or the “new” button to indicate whether they had heard the piece before.
Results: Musical stimuli vs. silence activated the bilateral superior temporal gyrus, right insula, right middle frontal gyrus, bilateral medial frontal gyrus and the left anterior cerebellum. Old pieces led to activation in the left medial dorsal thalamus and left midbrain compared to new pieces. For recognized vs. not recognized old pieces a focused activation in the right inferior frontal gyrus and the left cerebellum was found. Positive pieces activated the left medial frontal gyrus, the left precuneus, the right superior frontal gyrus, the left posterior cingulate, the bilateral middle temporal gyrus, and the left thalamus compared to less positive pieces.
Conclusion: Specific brain networks related to memory retrieval and emotional processing of symphonic film music were identified. The results imply that the valence of a music piece is important for memory performance and is recognized very fast.
doi:10.3389/fpsyg.2014.00114
PMCID: PMC3927073  PMID: 24634661
musical memory; episodic memory; emotions; brain-processing
22.  Dynamic musical communication of core affect 
Is there something special about the way music communicates feelings? Theorists since Meyer (1956) have attempted to explain how music could stimulate varied and subtle affective experiences by violating learned expectancies, or by mimicking other forms of social interaction. Our proposal is that music speaks to the brain in its own language; it need not imitate any other form of communication. We review recent theoretical and empirical literature, which suggests that all conscious processes consist of dynamic neural events, produced by spatially dispersed processes in the physical brain. Intentional thought and affective experience arise as dynamical aspects of neural events taking place in multiple brain areas simultaneously. At any given moment, this content comprises a unified “scene” that is integrated into a dynamic core through synchrony of neuronal oscillations. We propose that (1) neurodynamic synchrony with musical stimuli gives rise to musical qualia including tonal and temporal expectancies, and that (2) music-synchronous responses couple into core neurodynamics, enabling music to directly modulate core affect. Expressive music performance, for example, may recruit rhythm-synchronous neural responses to support affective communication. We suggest that the dynamic relationship between musical expression and the experience of affect presents a unique opportunity for the study of emotional experience. This may help elucidate the neural mechanisms underlying arousal and valence, and offer a new approach to exploring the complex dynamics of the how and why of emotional experience.
doi:10.3389/fpsyg.2014.00072
PMCID: PMC3956121  PMID: 24672492
neurodynamics; consciousness; affect; emotion;  musical expectancy; oscillation; synchrony
23.  Enhancing emotional experiences to dance through music: the role of valence and arousal in the cross-modal bias 
It is well established that emotional responses to stimuli presented to one perceptive modality (e.g., visual) are modulated by the concurrent presentation of affective information to another modality (e.g., auditory)—an effect known as the cross-modal bias. However, the affective mechanisms mediating this effect are still not fully understood. It remains unclear what role different dimensions of stimulus valence and arousal play in mediating the effect, and to what extent cross-modal influences impact not only our perception and conscious affective experiences, but also our psychophysiological emotional response. We addressed these issues by measuring participants’ subjective emotion ratings and their Galvanic Skin Responses (GSR) in a cross-modal affect perception paradigm employing videos of ballet dance movements and instrumental classical music as the stimuli. We chose these stimuli to explore the cross-modal bias in a context of stimuli (ballet dance movements) that most participants would have relatively little prior experience with. Results showed (i) that the cross-modal bias was more pronounced for sad than for happy movements, whereas it was equivalent when contrasting high vs. low arousal movements; and (ii) that movement valence did not modulate participants’ GSR, while movement arousal did, such that GSR was potentiated in the case of low arousal movements with sad music and when high arousal movements were paired with happy music. Results are discussed in the context of the affective dimension of neuroentrainment and with regards to implications for the art community.
doi:10.3389/fnhum.2014.00757
PMCID: PMC4186320  PMID: 25339880
cross-modal; affective body movement; multisensory; neuroentrainment; psychology of emotion; neuroesthetics; arousal; valence
24.  Music Composition from the Brain Signal: Representing the Mental State by Music 
This paper proposes a method to translate human EEG into music, so as to represent mental state by music. The arousal levels of the brain's mental state and of the music's emotion are implicitly used as the bridge between the mind and the music. The arousal level of the brain is based on EEG features extracted mainly by wavelet analysis, and the music arousal level is related to musical parameters such as pitch, tempo, rhythm, and tonality. While composing, some music principles (harmonics and structure) were taken into consideration. With EEGs recorded during various sleep stages as an example, the music generated from them had different patterns of pitch, rhythm, and tonality. Thirty-five volunteers listened to the music pieces, and a significant difference in music arousal levels was found. This implies that different mental states may be identified by the corresponding music, and that music generated from EEG may be a potential tool for EEG monitoring, biofeedback therapy, and so forth.
doi:10.1155/2010/267671
PMCID: PMC2837898  PMID: 20300580
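A hedged sketch of the core mapping the abstract describes: estimate an arousal index from EEG band power and map it to musical parameters such as tempo and register. The band definitions and mapping ranges are assumptions for illustration, not the paper's actual composition rules.

```python
# Hedged sketch: map an EEG arousal proxy (beta/alpha band power) to tempo and
# register. The bands and mapping ranges are illustrative assumptions.
import numpy as np

def band_power(eeg, fs, lo, hi):
    """Mean spectral power of a single-channel EEG segment in [lo, hi) Hz."""
    freqs = np.fft.rfftfreq(eeg.size, d=1.0 / fs)
    power = np.abs(np.fft.rfft(eeg - eeg.mean())) ** 2
    return power[(freqs >= lo) & (freqs < hi)].mean()

def arousal_to_music(eeg, fs=256):
    """Map a beta/alpha power ratio (a common arousal proxy) to tempo and pitch."""
    alpha = band_power(eeg, fs, 8, 13)
    beta = band_power(eeg, fs, 13, 30)
    arousal = beta / (alpha + beta)                 # 0 (drowsy) .. 1 (alert)
    tempo_bpm = 50 + 90 * arousal                   # slower music for low arousal
    base_midi_pitch = int(48 + 24 * arousal)        # lower register for low arousal
    return tempo_bpm, base_midi_pitch

segment = np.random.default_rng(2).normal(size=256 * 4)   # 4 s of placeholder EEG
print(arousal_to_music(segment))
```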
25.  Serial binary interval ratios improve rhythm reproduction 
Musical rhythm perception is a natural human ability that involves complex cognitive processes. Rhythm refers to the organization of events in time, and musical rhythms have an underlying hierarchical metrical structure. The metrical structure induces the feeling of a beat and the extent to which a rhythm induces the feeling of a beat is referred to as its metrical strength. Binary ratios are the most frequent interval ratios in musical rhythms. Rhythms with hierarchical binary ratios are better discriminated and reproduced than rhythms with hierarchical non-binary ratios. However, it remains unclear whether a superiority of serial binary over non-binary ratios in rhythm perception and reproduction exists. In addition, how different types of serial ratios influence the metrical strength of rhythms remains to be elucidated. The present study investigated serial binary vs. non-binary ratios in a reproduction task. Rhythms formed with exclusively binary (1:2:4:8), non-binary integer (1:3:5:6), and non-integer (1:2.3:5.3:6.4) ratios were examined within a constant meter. The results showed that the 1:2:4:8 rhythm type was more accurately reproduced than the 1:3:5:6 and 1:2.3:5.3:6.4 rhythm types, and the 1:2.3:5.3:6.4 rhythm type was more accurately reproduced than the 1:3:5:6 rhythm type. Further analyses showed that reproduction performance was better predicted by the distribution pattern of event occurrences within an inter-beat interval, than by the coincidence of events with beats, or the magnitude and complexity of interval ratios. Whereas rhythm theories and empirical data emphasize the role of the coincidence of events with beats in determining metrical strength and predicting rhythm performance, the present results suggest that rhythm processing may be better understood when the distribution pattern of event occurrences is taken into account. These results provide new insights into the mechanisms underlying musical rhythm perception.
doi:10.3389/fpsyg.2013.00512
PMCID: PMC3734359  PMID: 23964258
music; rhythm; binary; ratio; beat; distribution pattern
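The three ratio conditions above can be turned into stimulus onset times by scaling the interval ratios to a fixed pattern duration. The sketch below does exactly that; the interval ordering and total duration are illustrative assumptions, not the study's exact stimuli.

```python
# Hedged sketch of building rhythm stimuli from serial interval ratios, as in
# the three conditions described above.
import random

RATIO_SETS = {
    "binary":      [1, 2, 4, 8],
    "non-binary":  [1, 3, 5, 6],
    "non-integer": [1, 2.3, 5.3, 6.4],
}

def rhythm_from_ratios(ratios, total_ms=2000, shuffle=True):
    """Return onset times (ms) of a rhythm whose inter-onset intervals follow
    the given ratios, scaled so the pattern spans `total_ms`."""
    intervals = list(ratios)
    if shuffle:
        random.shuffle(intervals)                    # serial order of intervals
    scale = total_ms / sum(intervals)
    onsets, t = [0.0], 0.0
    for iv in intervals[:-1]:                        # last interval closes the cycle
        t += iv * scale
        onsets.append(round(t, 1))
    return onsets

random.seed(0)
for name, ratios in RATIO_SETS.items():
    print(name, rhythm_from_ratios(ratios))
```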
