Moving to music is an essential human pleasure particularly related to musical groove. Structurally, music associated with groove is often characterised by rhythmic complexity in the form of syncopation, frequently observed in musical styles such as funk, hip-hop and electronic dance music. Structural complexity has been related to positive affect in music more broadly, but the function of syncopation in eliciting pleasure and body-movement in groove is unknown. Here we report results from a web-based survey which investigated the relationship between syncopation and ratings of wanting to move and experienced pleasure. Participants heard funk drum-breaks with varying degrees of syncopation and audio entropy, and rated the extent to which the drum-breaks made them want to move and how much pleasure they experienced. While entropy was found to be a poor predictor of wanting to move and pleasure, the results showed that medium degrees of syncopation elicited the most desire to move and the most pleasure, particularly for participants who enjoy dancing to music. Hence, there is an inverted U-shaped relationship between syncopation, body-movement and pleasure, and syncopation seems to be an important structural factor in embodied and affective responses to groove.
The score is a symbolic encoding that describes a piece of music, written according to the conventions of music theory, which must be rendered as sound (e.g., by a performer) before it may be perceived as music by the listener. In this paper we provide a step towards unifying music theory with music perception in terms of the relationship between notated rhythm (i.e., the score) and perceived syncopation. In our experiments we evaluated this relationship by manipulating the score, rendering it as sound and eliciting subjective judgments of syncopation. We used a metronome to provide explicit cues to the prevailing rhythmic structure (as defined in the time signature). Three-bar scores with time signatures of 4/4 and 6/8 were constructed using repeated one-bar rhythm-patterns, with each pattern built from basic half-bar rhythm-components. Our manipulations gave rise to various rhythmic structures, including polyrhythms and rhythms with missing strong- and/or down-beats. Listeners (N = 10) were asked to rate the degree of syncopation they perceived in response to a rendering of each score. We observed higher degrees of syncopation in time signatures of 6/8, for polyrhythms, and for rhythms featuring a missing down-beat. We also found that the location of a rhythm-component within the bar has a significant effect on perceived syncopation. Our findings provide new insight into models of syncopation and point the way towards areas in which the models may be improved.
Music evokes complex emotions beyond pleasant/unpleasant or happy/sad dichotomies usually investigated in neuroscience. Here, we used functional neuroimaging with parametric analyses based on the intensity of felt emotions to explore a wider spectrum of affective responses reported during music listening. Positive emotions correlated with activation of left striatum and insula when high-arousing (Wonder, Joy) but right striatum and orbitofrontal cortex when low-arousing (Nostalgia, Tenderness). Irrespective of their positive/negative valence, high-arousal emotions (Tension, Power, and Joy) also correlated with activations in sensory and motor areas, whereas low-arousal categories (Peacefulness, Nostalgia, and Sadness) selectively engaged ventromedial prefrontal cortex and hippocampus. The right parahippocampal cortex activated in all but positive high-arousal conditions. Results also suggested some blends between activation patterns associated with different classes of emotions, particularly for feelings of Wonder or Transcendence. These data reveal a differentiated recruitment across emotions of networks involved in reward, memory, self-reflective, and sensorimotor processes, which may account for the unique richness of musical emotions.
emotion; fMRI; music; striatum; ventro-medial prefrontal cortex
The ages of six male patients with the sick sinus syndrome ranged from 10 to 15 years when their symptoms began. At rest all had a heart rate of 60/min or less. Two had syncopal attacks which threatened life; one had only attacks of dizziness; the other three had no syncopal attacks but had recurrent attacks of supraventricular tachycardia ('brady-tachycardia syndrome') which were more resistant to drug therapy than is usual in childhood. They were not controlled or suppressed by digoxin when it was given. Substernal pain occurred in two patients who had syncope. In all patients the heart rate remained inappropriately slow after exercise and atropine. Cardiac pacemakers were used in the two patients with life-threatening syncope. Any patient who has dizziness or syncopal attacks and an inappropriately slow heart rate should have electrocardiograms recorded at rest and after exercise to record the heart rate and to look for abnormal P-waves.
Music is a powerful medium capable of eliciting a broad range of emotions. Although the relationship between language and music is well documented, relatively little is known about the effects of lyrics and the voice on the emotional processing of music and on listeners' preferences. In the present study, we investigated the effects of vocals in music on participants' perceived valence and arousal in songs. Participants (N = 50) made valence and arousal ratings for familiar songs that were presented with and without the voice. We observed robust effects of vocal content on perceived arousal. Furthermore, we found that the effect of the voice on enhancing arousal ratings is independent of the familiarity of the song and differs across gender and age: females were more influenced by vocals than males, and these gender effects were enhanced among older adults. Results highlight the effects of gender and aging in emotion perception and are discussed in terms of the social roles of music.
emotion; music; arousal; perception; gender; aging
This paper proposes a method to translate human EEG into music, so as to represent mental state by music. The arousal levels of the brain mental state and music emotion are implicitly used as the bridge between the mind world and the music. The arousal level of the brain is based on the EEG features extracted mainly by wavelet analysis, and the music arousal level is related to musical parameters such as pitch, tempo, rhythm, and tonality. While composing, some music principles (harmonics and structure) were taken into consideration. Using EEGs recorded during various sleep stages as an example, the music generated from them showed different patterns of pitch, rhythm, and tonality. Thirty-five volunteers listened to the music pieces, and a significant difference in music arousal levels was found. This implies that different mental states may be identified by the corresponding music, and so the music from EEG may be a potential tool for EEG monitoring, biofeedback therapy, and so forth.
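The core idea of bridging mental state and music via arousal can be sketched as a simple parameter mapping. The function below is a hypothetical illustration only: the linear mapping, the tempo range, and the MIDI pitch range are assumptions for demonstration, not the authors' actual composition rules.

```python
# Hypothetical sketch: map a normalized EEG-derived arousal level (0.0-1.0)
# to musical parameters, following the general idea that higher arousal
# corresponds to faster tempo and higher pitch. The ranges and the linear
# form of the mapping are illustrative assumptions.

def arousal_to_music_params(arousal: float) -> dict:
    """Translate an arousal value in [0, 1] into tempo (BPM) and a MIDI pitch centre."""
    if not 0.0 <= arousal <= 1.0:
        raise ValueError("arousal must lie in [0, 1]")
    tempo_bpm = 60 + arousal * 80             # 60 BPM (calm) .. 140 BPM (excited)
    pitch_center = 48 + round(arousal * 24)   # MIDI C3 .. C5
    return {"tempo_bpm": tempo_bpm, "pitch_center": pitch_center}

print(arousal_to_music_params(0.1))   # low-arousal (e.g. deep sleep) settings
print(arousal_to_music_params(0.9))   # high-arousal settings
```

In the paper's scheme such parameters would then drive a composition step constrained by harmonic and structural rules; that step is omitted here.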
In general, sad music is thought to cause us to experience sadness, which is considered an unpleasant emotion. As a result, the question arises as to why we listen to sad music if it evokes sadness. One possible answer to this question is that we may actually feel positive emotions when we listen to sad music. This suggestion may appear to be counterintuitive; however, in this study, by dividing musical emotion into perceived emotion and felt emotion, we investigated this potential emotional response to music. We hypothesized that felt and perceived emotion may not actually coincide in this respect: sad music would be perceived as sad, but the experience of listening to sad music would evoke positive emotions. A total of 44 participants listened to musical excerpts and provided data on perceived and felt emotions by rating 62 descriptive words or phrases related to emotions on a scale that ranged from 0 (not at all) to 4 (very much). The results revealed that the sad music was perceived to be more tragic, whereas the actual experiences of the participants listening to the sad music induced them to feel more romantic, more blithe, and less tragic emotions than they actually perceived with respect to the same music. Thus, the participants experienced ambivalent emotions when they listened to the sad music. After considering the possible reasons that listeners were induced to experience emotional ambivalence by the sad music, we concluded that the formulation of a new model would be essential for examining the emotions induced by music and that this new model must entertain the possibility that what we experience when listening to music is vicarious emotion.
sad music; vicarious emotion; perceived/felt emotion; ambivalent emotion; pleasant emotion
Specific pieces of music can elicit strong emotions in listeners and, possibly in connection with these emotions, can be remembered even years later. However, episodic memory for emotional music compared with less emotional music has not yet been examined. We investigated whether emotional music is remembered better than less emotional music. Also, we examined the influence of musical structure on memory performance.
Recognition of 40 musical excerpts was investigated as a function of arousal, valence, and emotional intensity ratings of the music. In the first session the participants judged valence and arousal of the musical pieces. One week later, participants listened to the 40 old and 40 new musical excerpts randomly interspersed and were asked to make an old/new decision as well as to indicate arousal and valence of the pieces. Musical pieces that were rated as very positive were recognized significantly better.
Musical excerpts rated as very positive are remembered better. Valence seems to be an important modulator of episodic long-term memory for music. Evidently, strong emotions related to the musical experience facilitate memory formation and retrieval.
Listening to music has been found to reduce acute and chronic pain. The underlying mechanisms are poorly understood; however, emotion and cognitive mechanisms have been suggested to influence the analgesic effect of music. In this study we investigated the influence of familiarity, emotional and cognitive features, and cognitive style on music-induced analgesia. Forty-eight healthy participants were divided into three groups (empathizers, systemizers and balanced) and received acute pain induced by heat while listening to different sounds. Participants listened to unfamiliar Mozart music rated with high valence and low arousal, unfamiliar environmental sounds with similar valence and arousal as the music, an active distraction task (mental arithmetic) and a control, and rated the pain. Data showed that the active distraction led to significantly less pain than did the music or sounds. Both unfamiliar music and sounds reduced pain significantly when compared to the control condition; however, music was no more effective than sound to reduce pain. Furthermore, we found correlations between pain and emotion ratings. Finally, systemizers reported less pain during the mental arithmetic compared with the other two groups. These findings suggest that familiarity may be key in the influence of the cognitive and emotional mechanisms of music-induced analgesia, and that cognitive styles may influence pain perception.
Blood centers rely heavily upon adolescent donors to meet blood demand, but pre-syncope and syncope are more frequent in younger donors. Studies have suggested administration of water prior to donation may reduce syncope and/or pre-syncope in this group.
Study design and methods
We conducted a randomized, controlled trial to establish the effect of pre-loading with 500 ml of water on the rate of syncope and pre-syncope in adolescent donors. School collection sites in the Eastern Cape Province of South Africa were randomized to receive water or not. Incidence of syncope and pre-syncope was compared between randomization groups using multivariable logistic regression.
Of 2,464 study participants, 1,337 received water and 1,127 did not; groups differed slightly by gender and race. Syncope or pre-syncope was seen in 23 (1.7%) of the treatment and 18 (1.6%) of the control arm subjects. After adjusting for race, gender, age and donation history, there was no difference in outcome between the water versus no water arms (adjusted odds ratio [OR] 0.80, 95% CI 0.42–1.53). Black donors had 7-fold lower odds of syncope or pre-syncope than their white counterparts (adjusted OR 0.14, 95% CI 0.04–0.47).
Preloading adolescent donors with 500 ml of water did not have a major effect in reducing syncope and pre-syncope in South African adolescent donors. Our adolescent donors had a lower overall syncope and pre-syncope rate than similar populations in the USA, limiting the statistical power of the study. We confirmed much lower rates of syncope and pre-syncope among young Black donors.
Blood Donors; Syncope; Randomized controlled trial; South Africa; Adolescent
Musicians imagine music during mental rehearsal, when reading from a score, and while composing. An important characteristic of music is its temporality. Among the parameters that vary through time is sound intensity, perceived as patterns of loudness. Studies of mental imagery for melodies (i.e., pitch and rhythm) show interference from concurrent musical pitch and verbal tasks, but how we represent musical changes in loudness is unclear. Theories suggest that our perceptions of loudness change relate to our perceptions of force or effort, implying a motor representation. An experiment was conducted to investigate the modalities that contribute to imagery for loudness change. Musicians performed a within-subjects loudness change recall task, comprising 48 trials. First, participants heard a musical scale played with varying patterns of loudness, which they were asked to remember. This was followed by either an empty 8 s interval (no-distractor control) or the presentation of a series of four sine tones, four visual letters, or three conductor gestures, also to be remembered. Participants then saw an unfolding score of the notes of the scale, during which they were to imagine the corresponding scale in their mind while adjusting a slider to indicate the imagined changes in loudness. Finally, participants performed a recognition task of the tone, letter, or gesture sequence. Based on the motor hypothesis, we predicted that observing and remembering conductor gestures would impair loudness change scale recall, while observing and remembering tone or letter string stimuli would not. Results support this prediction, with loudness change recalled less accurately in the gestures condition than in the control condition. An effect of musical training suggests that auditory and motor imagery ability may be closely related to domain expertise.
mental imagery; loudness; music; motor processing; melody; working memory
A 68-year-old woman with a history of dilated non-ischemic cardiomyopathy presented with syncope. The index ECG showed sinus rhythm with left bundle branch block. On telemetry, episodes of sinus rhythm with narrower QRS complexes conducted in a 2:1 pattern were noted. An invasive electrophysiological study was performed to determine the cause of syncope. Normal conduction up to the AV node with an AH interval of 79 ms (normal = 55–125 ms) was observed. However, every alternate sinus beat was blocked after the inscription of the His deflection (infra-Hisian block). The narrow beats conducted through the His bundle with HV intervals of 54 ms (normal = 35–55 ms). When 1:1 conduction resumed, further abnormality of the His–Purkinje conduction system became evident, with an LBBB QRS morphology and prolongation of the HV interval (HV = 96 ms). Criteria to differentiate nodal versus infranodal block based on electrophysiological properties of the nodal and infranodal system are discussed.
His–Purkinje system; Atrioventricular node; 2:1 block; Atrioventricular nodal block; Infra-Hisian block
Most perceived parameters of sound (e.g. pitch, duration, timbre) can also be imagined in the absence of sound. These parameters are imagined more veridically by expert musicians than non-experts. Evidence for whether loudness is imagined, however, is conflicting. In music, the question of whether loudness is imagined is particularly relevant due to its role as a principal parameter of performance expression. This study addressed the hypothesis that the veridicality of imagined loudness improves with increasing musical expertise. Experts, novices and non-musicians imagined short passages of well-known classical music under two counterbalanced conditions: 1) while adjusting a slider to indicate imagined loudness of the music and 2) while tapping out the rhythm to indicate imagined timing. Subtests assessed music listening abilities and working memory span to determine whether these factors, also hypothesised to improve with increasing musical expertise, could account for imagery task performance. Similarity between each participant’s imagined and listening loudness profiles and reference recording intensity profiles was assessed using time series analysis and dynamic time warping. The results suggest a widespread ability to imagine the loudness of familiar music. The veridicality of imagined loudness tended to be greatest for the expert musicians, supporting the predicted relationship between musical expertise and musical imagery ability.
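The study's similarity analysis between imagined and reference loudness profiles can be illustrated with a textbook dynamic time warping (DTW) implementation. This minimal O(n·m) sketch is a generic DTW, not the authors' exact pipeline, and the toy loudness series are invented for demonstration.

```python
# Illustrative sketch: compare an imagined loudness profile with a reference
# intensity profile using dynamic time warping (DTW), which tolerates local
# tempo differences between the two time series.

def dtw_distance(a: list[float], b: list[float]) -> float:
    """Return the DTW alignment cost between two loudness time series."""
    n, m = len(a), len(b)
    INF = float("inf")
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],       # insertion
                                 cost[i][j - 1],       # deletion
                                 cost[i - 1][j - 1])   # match
    return cost[n][m]

# A time-stretched crescendo aligns far more cheaply with the original
# crescendo than a decrescendo does.
crescendo = [0.1, 0.2, 0.4, 0.7, 1.0]
stretched = [0.1, 0.1, 0.2, 0.4, 0.7, 1.0]
reversed_ = list(reversed(crescendo))
print(dtw_distance(crescendo, stretched) < dtw_distance(crescendo, reversed_))
```

Because DTW absorbs timing differences, it isolates the shape of the loudness contour, which is the aspect of imagery veridicality the study was after.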
Williams syndrome (WS), a genetic, neurodevelopmental disorder, is of keen interest to music cognition researchers because of its characteristic auditory sensitivities and emotional responsiveness to music. However, actual musical perception and production abilities are more variable. We examined musicality in WS through the lens of amusia and explored how their musical perception abilities related to their auditory sensitivities, musical production skills, and emotional responsiveness to music. In our sample of 73 adolescents and adults with WS, 11% met criteria for amusia, which is higher than the 4% prevalence rate reported in the typically developing (TD) population. Amusia was not related to auditory sensitivities but was related to musical training. Performance on the amusia measure strongly predicted musical skill but not emotional responsiveness to music, which was better predicted by general auditory sensitivities. This study represents the first time amusia has been examined in a population with a known neurodevelopmental genetic disorder with a range of cognitive abilities. Results have implications for the relationships across different levels of auditory processing, musical skill development, and emotional responsiveness to music, as well as the understanding of gene-brain-behavior relationships in individuals with WS and TD individuals with and without amusia.
Williams syndrome; music; amusia; pitch perception; auditory sensitivity
One hundred and sixty consecutive patients less than 50 years of age (mean 38 years) referred for long term electrocardiographic recording were evaluated retrospectively. Significant cardiac arrhythmias were detected in 51 of 107 (48%) patients examined because of syncope or dizzy spells or both. Of 39 patients examined for cardiac complaints or presumed complex arrhythmias, 15 (38%) had significant arrhythmias. Of 14 patients examined because of otherwise unexplained strokes, nine had slow sinus rates. Of these, one patient had recently undertaken moderately intensive athletic activity and four had been undertaking vigorous athletic activities for several years. All of the 12 active athletes who were followed up on account of syncope or dizzy spells were free of symptoms after reducing their athletic activities. The cardiac rhythm returned to normal in four out of five who underwent repeat long term electrocardiographic recording. It is suggested that vigorous athletic activity in subjects of 30-50 years of age may transform the adaptive bradycardia of the athlete into a condition similar to the embolising sick sinus syndrome.
Apart from its natural relevance to cognition, music provides a window into the intimate relationships between production, perception, experience, and emotion. Here, emotional responses and neural activity were observed as they evolved together with stimulus parameters over several minutes. Participants listened to a skilled music performance that included the natural fluctuations in timing and sound intensity that musicians use to evoke emotional responses. A mechanical performance of the same piece served as a control. Before and after fMRI scanning, participants reported real-time emotional responses on a 2-dimensional rating scale (arousal and valence) as they listened to each performance. During fMRI scanning, participants listened without reporting emotional responses. Limbic and paralimbic brain areas responded to the expressive dynamics of human music performance, and both emotion and reward related activations during music listening were dependent upon musical training. Moreover, dynamic changes in timing predicted ratings of emotional arousal, as well as real-time changes in neural activity. BOLD signal changes correlated with expressive timing fluctuations in cortical and subcortical motor areas consistent with pulse perception, and in a network consistent with the human mirror neuron system. These findings show that expressive music performance evokes emotion and reward related neural activations, and that music's affective impact on the brains of listeners is altered by musical training. Our observations are consistent with the idea that music performance evokes an emotional response through a form of empathy that is based, at least in part, on the perception of movement and on violations of pulse-based temporal expectancies.
Although human musical performances represent one of the most valuable achievements of mankind, the best musicians perform imperfectly. Musical rhythms are not entirely accurate and thus inevitably deviate from the ideal beat pattern. Nevertheless, computer generated perfect beat patterns are frequently devalued by listeners due to a perceived lack of human touch. Professional audio editing software therefore offers a humanizing feature which artificially generates rhythmic fluctuations. However, the built-in humanizing units are essentially random number generators producing only simple uncorrelated fluctuations. Here, for the first time, we establish long-range fluctuations as an inevitable natural companion of both simple and complex human rhythmic performances. Moreover, we demonstrate that listeners strongly prefer long-range correlated fluctuations in musical rhythms. Thus, the favorable fluctuation type for humanizing interbeat intervals coincides with the one generically inherent in human musical performances.
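The contrast the paper draws, between the uncorrelated jitter of built-in humanizing units and the long-range correlated (1/f-like) fluctuations of human performances, can be sketched with two noise generators. The Voss-McCartney algorithm below is a standard approximation of 1/f noise; it is illustrative and is not the paper's own analysis code, and the beat grid and jitter magnitude are made-up parameters.

```python
import random

# Two "humanizing" strategies for nominal beat times: uncorrelated white
# noise (typical of audio-software humanizers) versus long-range correlated
# 1/f-like noise (the kind found in, and preferred for, human performance).

def white_noise(n: int) -> list[float]:
    """Uncorrelated Gaussian samples: each deviation is independent."""
    return [random.gauss(0.0, 1.0) for _ in range(n)]

def pink_noise(n: int, rows: int = 8) -> list[float]:
    """Voss-McCartney 1/f-like noise: sum rows refreshed at octave-spaced rates."""
    values = [random.gauss(0.0, 1.0) for _ in range(rows)]
    samples = []
    for i in range(n):
        for r in range(rows):
            if i % (1 << r) == 0:          # row r refreshes every 2**r steps
                values[r] = random.gauss(0.0, 1.0)
        samples.append(sum(values) / rows ** 0.5)
    return samples

def humanize(n_beats: int, interval_s: float, jitter_s: float, noise) -> list[float]:
    """Offset each nominal beat time by a scaled noise sample."""
    deviations = noise(n_beats)
    return [i * interval_s + jitter_s * d for i, d in enumerate(deviations)]

metronomic   = humanize(16, 0.5, 0.0, white_noise)    # perfect 120 BPM grid
uncorrelated = humanize(16, 0.5, 0.01, white_noise)   # built-in style jitter
correlated   = humanize(16, 0.5, 0.01, pink_noise)    # listener-preferred style
```

In the correlated version, a beat that drifts late tends to be followed by further late beats, mimicking the slowly wandering timing of a human drummer rather than independent scatter around the grid.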
Falls are frequent in the elderly and affect mortality, morbidity, loss of functional capacity and institutionalization. In the older patient the incidence of falls can be underestimated, even in the absence of clear cognitive impairment, because it is often difficult to reconstruct the circumstances of the event. Falls due to syncope are commonly associated with retrograde amnesia, and in 40 to 60% of cases falls occur in the absence of witnesses. The pathogenesis of falls is often multifactorial, due to physiological age-related changes, to properly pathological factors, or to the environment. The identification of risk factors is essential in the planning of preventive measures. Syncope is one of the major causes of falls. About 20% of cardiovascular syncope in patients older than 70 presents as a fall, and more than 20% of older people with Carotid Sinus Syndrome complain of falls as well as syncope. These data clearly indicate that older patients with a history of falls should undergo cardiovascular and neuroautonomic assessment in addition to the survey of other risk factors. Multifactorial assessment requires a synergy of various specialists. The geriatrician coordinates the multidisciplinary intervention in order to make the most effective evaluation of the risk of falling, searching for all predisposing factors, aiming towards a program of prevention. In clear pathological conditions it is possible to enact a specific treatment. Particular attention must indeed be paid to the re-evaluation of drug therapy, with dose adjustments or withdrawal, especially for antihypertensives, diuretics and benzodiazepines. The Guidelines of the American Geriatrics Society recommend modification of environmental hazards, gait training, hip protectors and appropriate use of walking aids (canes, walkers), which can be effective elements of a multifactorial intervention program. Balance exercises are also recommended.
In conclusion, an initial assessment, supported by a comprehensive cardiovascular and neuroautonomic evaluation, allows for reaching a final diagnosis in most cases, demonstrating a key role in the real identification of the etiology of the fall and implementing the treatment measures.
falls; elderly; multifactorial assessment; prevention strategies
Patterns of syncope evaluation vary widely among physicians and hospitals. The aim of this study was to assess current diagnostic patterns and medical costs in the evaluation of patients presenting with syncope at the emergency department (ED) or the outpatient department (OPD) of a referral hospital.
Materials and Methods
This study included 171 consecutive patients with syncope, who visited the ED or OPD between January 2009 and July 2009.
The ED group had fewer episodes of syncope [2 (1–2) vs. 2 (1–5), p=0.014] and fewer prodromal symptoms (81.5% vs. 93.3%, p=0.018) than the OPD group. Diagnostic tests were more frequently performed in the ED group than in the OPD group (6.2±1.7 vs. 5.3±2.0; p=0.012). In addition, tests with low diagnostic yields were more frequently used in the ED group than in the OPD group. The total cost of syncope evaluation per patient was higher in the ED group than in the OPD group [823000 (440000–1408000) won vs. 420000 (186000–766000) won, p<0.001].
There were some differences in the clinical characteristics of patients and diagnostic patterns in the evaluation of syncope between the ED and the OPD groups. Therefore, a selective diagnostic approach according to the presentation site is needed to improve diagnostic yields and to reduce the time and costs of evaluation of syncope.
Syncope; diagnosis; cost-benefit analysis
A 32-year-old Spanish man presented to hospital after a second episode of syncope immediately following exercise. On admission, his vital signs were stable and he had a regular heart rate of 60 bpm. ECG and transthoracic echocardiogram were normal. He completed 15 min of a Bruce protocol exercise test. One minute and ten seconds into recovery, he lost consciousness. His ECG demonstrated sinus arrest with pauses of up to 5 s and subsequently junctional ectopy. After 38 s, his heart returned to sinus rhythm at a rate of 140 bpm and he regained consciousness. Vasovagal syncope following exercise in the absence of structural heart disease is uncommonly reported. When cases of exercise-related syncope in patients with structurally normal hearts have been reported, the typical patient is a young male who engages in physical training. Treatment strategies in patients suffering from vasovagal asystole are necessarily empirical, and careful judgement based on the specific features of the individual case needs to be employed.
Despite a wealth of evidence for the involvement of the autonomic nervous system (ANS) in health and disease and the ability of music to affect ANS activity, few studies have systematically explored the therapeutic effects of music on ANS dysfunction. Furthermore, when ANS activity is quantified and analyzed, it is usually from a point of convenience rather than from an understanding of its physiological basis. After a review of the experimental and therapeutic literatures exploring music and the ANS, a “Neurovisceral Integration” perspective on the interplay between the central and autonomic nervous systems is introduced, and the associated implications for physiological, emotional, and cognitive health are explored. The construct of heart rate variability is discussed both as an example of this complex interplay and as a useful metric for exploring the sometimes subtle effect of music on autonomic response. Suggestions for future investigations using musical interventions are offered based on this integrative account.
autonomic nervous system; entrainment; heart rate variability; neurovisceral integration; psychophysiological responses
Despite much recent interest in the clinical neuroscience of music processing, the cognitive organization of music as a domain of non-verbal knowledge has been little studied. Here we addressed this issue systematically in two expert musicians with clinical diagnoses of semantic dementia and Alzheimer’s disease, in comparison with a control group of healthy expert musicians. In a series of neuropsychological experiments, we investigated associative knowledge of musical compositions (musical objects), musical emotions, musical instruments (musical sources) and music notation (musical symbols). These aspects of music knowledge were assessed in relation to musical perceptual abilities and extra-musical neuropsychological functions. The patient with semantic dementia showed relatively preserved recognition of musical compositions and musical symbols despite severely impaired recognition of musical emotions and musical instruments from sound. In contrast, the patient with Alzheimer’s disease showed impaired recognition of compositions, with somewhat better recognition of composer and musical era, and impaired comprehension of musical symbols, but normal recognition of musical emotions and musical instruments from sound. The findings suggest that music knowledge is fractionated, and superordinate musical knowledge is relatively more robust than knowledge of particular music. We propose that music constitutes a distinct domain of non-verbal knowledge but shares certain cognitive organizational features with other brain knowledge systems. Within the domain of music knowledge, dissociable cognitive mechanisms process knowledge derived from physical sources and the knowledge of abstract musical entities.
music; semantic memory; dementia; semantic dementia; Alzheimer’s disease
Electroencephalography (EEG)-based emotion classification during music listening has gained increasing attention due to its promise of potential applications such as musical affective brain-computer interfaces (ABCI), neuromarketing, music therapy, and implicit multimedia tagging and triggering. However, music is an ecologically valid and complex stimulus that conveys certain emotions to listeners through compositions of musical elements, and distinguishing emotions using EEG signals alone remains challenging. This study aimed to assess the applicability of a multimodal approach by leveraging the EEG dynamics and acoustic characteristics of musical contents for the classification of emotional valence and arousal. To this end, this study adopted machine-learning methods to systematically elucidate the roles of the EEG and music modalities in emotion modeling. The empirical results suggested that when whole-head EEG signals were available, the inclusion of musical contents did not improve the classification performance. The obtained performance of 74–76% using the EEG modality alone was statistically comparable to that of the multimodal approach. However, if EEG dynamics were only available from a small set of electrodes (likely the case in real-life applications), the music modality would play a complementary role and augment the EEG results from around 61–67% in valence classification and from around 58–67% in arousal classification. Musical timbre appeared to replace less-discriminative EEG features and led to improvements in both valence and arousal classification, whereas musical loudness contributed specifically to arousal classification. The present study not only provided principles for constructing an EEG-based multimodal approach, but also revealed fundamental insights into the interplay of brain activity and musical contents in emotion modeling.
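The multimodal strategy described above amounts to feature-level (early) fusion: EEG feature vectors are concatenated with acoustic features before classification. The toy sketch below uses a nearest-centroid classifier and invented feature values purely for illustration; the study employed more elaborate machine-learning models and real EEG/audio features.

```python
# Toy sketch of feature-level fusion for valence classification: concatenate
# EEG features with acoustic features (e.g. loudness, timbre descriptors)
# and classify by nearest class centroid. All numbers are illustrative.

def fuse(eeg: list[float], audio: list[float]) -> list[float]:
    """Early fusion: concatenate the two modality feature vectors."""
    return eeg + audio

def centroid(vectors: list[list[float]]) -> list[float]:
    """Element-wise mean of a class's training vectors."""
    dim = len(vectors[0])
    return [sum(v[i] for v in vectors) / len(vectors) for i in range(dim)]

def classify(sample: list[float], centroids: dict) -> str:
    """Assign the label whose centroid is closest in Euclidean distance."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(centroids, key=lambda label: dist(sample, centroids[label]))

# Invented training trials: (EEG features, audio features) per condition.
high_valence = [fuse([0.9, 0.8], [0.7]), fuse([0.8, 0.9], [0.8])]
low_valence  = [fuse([0.1, 0.2], [0.3]), fuse([0.2, 0.1], [0.2])]
centroids = {"high": centroid(high_valence), "low": centroid(low_valence)}

print(classify(fuse([0.85, 0.75], [0.9]), centroids))  # -> high
```

With few electrodes, the EEG portion of the fused vector shrinks, and the audio features carry relatively more of the discriminative load, which is the complementarity the study reports.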
EEG; emotion classification; affective brain-computer interface; music signal processing; music listening
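The multimodal scheme described above can be illustrated with a minimal feature-level (early) fusion sketch: per-trial EEG features (e.g., band powers from a few electrodes) are concatenated with audio features (e.g., timbre and loudness descriptors) before classification. This is not the study's actual pipeline; the synthetic features, class separations, and the simple nearest-centroid classifier are all illustrative assumptions.

```python
import random

random.seed(0)

def make_trial(valence):
    """Simulate one listening trial (hypothetical features, not real data).

    Four EEG band-power values plus two audio features (stand-ins for
    timbre brightness and loudness); class means are assumptions.
    """
    base = 1.0 if valence == "high" else -1.0
    eeg = [base + random.gauss(0, 0.5) for _ in range(4)]
    audio = [base + random.gauss(0, 0.5) for _ in range(2)]
    # Feature-level (early) fusion: concatenate the two modalities.
    return eeg + audio, valence

def centroid(vectors):
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def fit_nearest_centroid(trials):
    # One centroid per valence class over the fused feature vectors.
    by_class = {}
    for x, y in trials:
        by_class.setdefault(y, []).append(x)
    return {y: centroid(xs) for y, xs in by_class.items()}

def predict(model, x):
    def sq_dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(model, key=lambda y: sq_dist(model[y], x))

train = [make_trial(v) for v in ["high", "low"] * 40]
test = [make_trial(v) for v in ["high", "low"] * 10]
model = fit_nearest_centroid(train)
acc = sum(predict(model, x) == y for x, y in test) / len(test)
```

Dropping the `audio` slice from the fused vector mimics the EEG-only condition; with few, noisy EEG features the audio features carry complementary discriminative information, which is the effect the study reports for sparse electrode setups.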
Due to the lack of efficacy in recent trials, current guidelines for the treatment of neurally mediated (vasovagal) syncope do not promote cardiac pacemaker implantation. However, the finding of asystole during head-up tilt-induced (pre)syncope may lead to overdiagnosis of cardioinhibitory syncope and to treatment with cardiac pacemakers, because blood pressure is often measured discontinuously. Furthermore, physicians may be more inclined to implant cardiac pacemakers in older patients. We hypothesized that true cardioinhibitory syncope, in which the decrease in heart rate precedes the fall in blood pressure, is a very rare finding, which might explain the lack of efficacy of pacemakers in neurally mediated syncope.
We studied 173 consecutive patients referred for unexplained syncope (114 women, 59 men, 42±1 years, 17±2 syncopal episodes). All had experienced (pre)syncope during head-up tilt testing followed by additional lower body negative pressure. We classified hemodynamic responses according to the modified Vasovagal Syncope International Study (VASIS) classification as mixed response (VASIS I), cardioinhibitory without asystole (VASIS IIa) or with asystole (VASIS IIb), and vasodepressor (VASIS III). We then defined the exact temporal relationship between hypotension and bradycardia to identify patients with true cardioinhibitory syncope.
Of the (pre)syncopal events during tilt testing, 63% were classified as VASIS I, 6% as VASIS IIb, 2% as VASIS IIa, and 29% as VASIS III. Cardioinhibitory responses (VASIS class II) progressively decreased from the youngest to the oldest age quartile. On more detailed temporal analysis, blood pressure reduction preceded the heart-rate decrease in all but six individuals (97%) overall, and in 10 of the 11 patients with asystole (VASIS IIb).
Hypotension precedes bradycardia onset during head-up tilt-induced (pre)syncope in the vast majority of patients, even in those classified as cardioinhibitory syncope according to the modified VASIS classification. Furthermore, cardioinhibitory syncope becomes less frequent with increasing age.
Background: Music can elicit strong emotions and can be remembered in connection with these emotions even decades later. Yet, the brain correlates of episodic memory for highly emotional music compared with less emotional music have not been examined. We therefore used fMRI to investigate brain structures activated by emotional processing of short excerpts of film music successfully retrieved from episodic long-term memory.
Methods: Eighteen non-musician volunteers were exposed to 60 structurally similar pieces of film music, each 10 s long, with high arousal ratings and either less positive or very positive valence ratings. Two similar sets of 30 pieces were created; each set was presented to half of the participants during the encoding session outside the scanner, while all stimuli were used during the second, recognition session inside the MRI scanner. During fMRI, each stimulation period (10 s) was followed by a 20 s resting period during which participants pressed either the “old” or the “new” button to indicate whether they had heard the piece before.
Results: Musical stimuli vs. silence activated the bilateral superior temporal gyrus, right insula, right middle frontal gyrus, bilateral medial frontal gyrus and the left anterior cerebellum. Old pieces led to activation in the left medial dorsal thalamus and left midbrain compared to new pieces. For recognized vs. not recognized old pieces a focused activation in the right inferior frontal gyrus and the left cerebellum was found. Positive pieces activated the left medial frontal gyrus, the left precuneus, the right superior frontal gyrus, the left posterior cingulate, the bilateral middle temporal gyrus, and the left thalamus compared to less positive pieces.
Conclusion: Specific brain networks related to memory retrieval and emotional processing of symphonic film music were identified. The results imply that the valence of a piece of music is important for memory performance and that valence is recognized very quickly.
musical memory; episodic memory; emotions; brain-processing