Music evokes complex emotions beyond pleasant/unpleasant or happy/sad dichotomies usually investigated in neuroscience. Here, we used functional neuroimaging with parametric analyses based on the intensity of felt emotions to explore a wider spectrum of affective responses reported during music listening. Positive emotions correlated with activation of left striatum and insula when high-arousing (Wonder, Joy) but right striatum and orbitofrontal cortex when low-arousing (Nostalgia, Tenderness). Irrespective of their positive/negative valence, high-arousal emotions (Tension, Power, and Joy) also correlated with activations in sensory and motor areas, whereas low-arousal categories (Peacefulness, Nostalgia, and Sadness) selectively engaged ventromedial prefrontal cortex and hippocampus. The right parahippocampal cortex activated in all but positive high-arousal conditions. Results also suggested some blends between activation patterns associated with different classes of emotions, particularly for feelings of Wonder or Transcendence. These data reveal a differentiated recruitment across emotions of networks involved in reward, memory, self-reflective, and sensorimotor processes, which may account for the unique richness of musical emotions.
emotion; fMRI; music; striatum; ventro-medial prefrontal cortex
The ages of six male patients with the sick sinus syndrome ranged from 10 to 15 years when their symptoms began. At rest all had a heart rate of 60/min or less. Two had syncopal attacks which threatened life; one had only attacks of dizziness; the other three had no syncopal attacks but had recurrent attacks of supraventricular tachycardia ('brady-tachycardia syndrome') which were more resistant to drug therapy than is usual in childhood. They were not controlled or suppressed by digoxin when it was given. Substernal pain occurred in two patients who had syncope. In all patients the heart rate remained inappropriately slow after exercise and atropine. Cardiac pacemakers were used in the two patients with life-threatening syncope. Any patient who has dizziness or syncopal attacks and an inappropriately slow heart rate should have electrocardiograms recorded at rest and after exercise to record the heart rate and to look for abnormal P-waves.
This paper proposes a method for translating human EEG into music, so as to represent mental states through music. The arousal levels of the brain's mental state and of the music's emotion are used implicitly as the bridge between mind and music. The arousal level of the brain is estimated from EEG features extracted mainly by wavelet analysis, and the arousal level of the music is related to musical parameters such as pitch, tempo, rhythm, and tonality. While composing, some music principles (harmonics and structure) were taken into consideration. As a demonstration, music generated from EEGs recorded during various sleep stages showed different patterns of pitch, rhythm, and tonality. Thirty-five volunteers listened to the music pieces, and a significant difference in music arousal levels was found. This implies that different mental states may be identified from the corresponding music, and so music generated from EEG may be a potential tool for EEG monitoring, biofeedback therapy, and so forth.
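A minimal sketch of what such an arousal-based mapping might look like, assuming a simple band-power ratio as a stand-in for the paper's wavelet-derived arousal feature and purely illustrative parameter ranges (neither is taken from the paper itself):

```python
import numpy as np

def eeg_arousal(eeg: np.ndarray, fs: float) -> float:
    """Crude arousal estimate: ratio of high- to low-frequency EEG power.
    (The paper uses wavelet features; a band-power ratio stands in here.)"""
    spectrum = np.abs(np.fft.rfft(eeg)) ** 2
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    low = spectrum[(freqs >= 0.5) & (freqs < 8)].sum()   # delta/theta bands
    high = spectrum[(freqs >= 8) & (freqs < 30)].sum()   # alpha/beta bands
    return high / (low + high)  # ~0 for deep-sleep-like EEG, ~1 for alert-like

def music_parameters(arousal: float) -> dict:
    """Map an arousal level in [0, 1] to coarse musical parameters
    (illustrative ranges, not the authors' composition rules)."""
    return {
        "tempo_bpm": 60 + 80 * arousal,          # slower music for low arousal
        "pitch_center_midi": 48 + 24 * arousal,  # lower register for low arousal
        "notes_per_beat": 1 + 3 * arousal,       # sparser rhythm for low arousal
        "mode": "minor" if arousal < 0.5 else "major",
    }
```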
Specific pieces of music can elicit strong emotions in listeners and, possibly in connection with these emotions, can be remembered even years later. However, episodic memory for emotional music compared with less emotional music has not yet been examined. We investigated whether emotional music is remembered better than less emotional music. Also, we examined the influence of musical structure on memory performance.
Recognition of 40 musical excerpts was investigated as a function of arousal, valence, and emotional intensity ratings of the music. In the first session the participants judged valence and arousal of the musical pieces. One week later, participants listened to the 40 old and 40 new musical excerpts randomly interspersed and were asked to make an old/new decision as well as to indicate arousal and valence of the pieces. Musical pieces that were rated as very positive were recognized significantly better.
Musical excerpts rated as very positive are remembered better. Valence seems to be an important modulator of episodic long-term memory for music. Evidently, strong emotions related to the musical experience facilitate memory formation and retrieval.
Listening to music has been found to reduce acute and chronic pain. The underlying mechanisms are poorly understood; however, emotional and cognitive mechanisms have been suggested to influence the analgesic effect of music. In this study we investigated the influence of familiarity, emotional and cognitive features, and cognitive style on music-induced analgesia. Forty-eight healthy participants were divided into three groups (empathizers, systemizers, and balanced) and received acute heat-induced pain while listening to different sounds. Participants rated their pain under four conditions: unfamiliar Mozart music rated high in valence and low in arousal, unfamiliar environmental sounds with valence and arousal similar to the music, an active distraction task (mental arithmetic), and a control condition. Data showed that the active distraction led to significantly less pain than did the music or sounds. Both unfamiliar music and sounds reduced pain significantly when compared to the control condition; however, music was no more effective than sound in reducing pain. Furthermore, we found correlations between pain and emotion ratings. Finally, systemizers reported less pain during the mental arithmetic task compared with the other two groups. These findings suggest that familiarity may be key to the cognitive and emotional mechanisms of music-induced analgesia, and that cognitive style may influence pain perception.
Musicians imagine music during mental rehearsal, when reading from a score, and while composing. An important characteristic of music is its temporality. Among the parameters that vary through time is sound intensity, perceived as patterns of loudness. Studies of mental imagery for melodies (i.e., pitch and rhythm) show interference from concurrent musical pitch and verbal tasks, but how we represent musical changes in loudness is unclear. Theories suggest that our perception of loudness change relates to our perception of force or effort, implying a motor representation. An experiment was conducted to investigate the modalities that contribute to imagery for loudness change. Musicians performed a within-subjects loudness change recall task comprising 48 trials. First, participants heard a musical scale played with varying patterns of loudness, which they were asked to remember. This was followed by an empty 8 s interval (nil distractor control), or by a series of four sine tones, four visual letters, or three conductor gestures, also to be remembered. Participants then saw an unfolding score of the notes of the scale, during which they were to imagine the corresponding scale in their mind while adjusting a slider to indicate the imagined changes in loudness. Finally, participants performed a recognition task on the tone, letter, or gesture sequence. Based on the motor hypothesis, we predicted that observing and remembering conductor gestures would impair loudness change recall for the scale, while observing and remembering tone or letter strings would not. Results support this prediction, with loudness change recalled less accurately in the gestures condition than in the control condition. An effect of musical training suggests that auditory and motor imagery ability may be closely related to domain expertise.
mental imagery; loudness; music; motor processing; melody; working memory
Most perceived parameters of sound (e.g. pitch, duration, timbre) can also be imagined in the absence of sound. These parameters are imagined more veridically by expert musicians than non-experts. Evidence for whether loudness is imagined, however, is conflicting. In music, the question of whether loudness is imagined is particularly relevant due to its role as a principal parameter of performance expression. This study addressed the hypothesis that the veridicality of imagined loudness improves with increasing musical expertise. Experts, novices and non-musicians imagined short passages of well-known classical music under two counterbalanced conditions: 1) while adjusting a slider to indicate imagined loudness of the music and 2) while tapping out the rhythm to indicate imagined timing. Subtests assessed music listening abilities and working memory span to determine whether these factors, also hypothesised to improve with increasing musical expertise, could account for imagery task performance. Similarity between each participant’s imagined and listening loudness profiles and reference recording intensity profiles was assessed using time series analysis and dynamic time warping. The results suggest a widespread ability to imagine the loudness of familiar music. The veridicality of imagined loudness tended to be greatest for the expert musicians, supporting the predicted relationship between musical expertise and musical imagery ability.
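Since the abstract names dynamic time warping as the similarity measure, a minimal textbook DTW sketch may be useful; the absolute-difference local cost and the toy loudness profiles below are assumptions for illustration, not the study's exact pipeline:

```python
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Classic O(len(a)*len(b)) dynamic time warping with an
    absolute-difference local cost."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j],      # insertion
                                 D[i, j - 1],      # deletion
                                 D[i - 1, j - 1])  # match
    return float(D[n, m])

# Invented example profiles: a lower distance means the imagined loudness
# profile tracks the reference recording more closely, even if the
# participant's internal tempo drifted.
imagined = np.array([0.2, 0.4, 0.8, 0.9, 0.5, 0.3])
reference = np.array([0.2, 0.3, 0.5, 0.9, 0.8, 0.4, 0.3])
print(dtw_distance(imagined, reference))
```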
Apart from its natural relevance to cognition, music provides a window into the intimate relationships between production, perception, experience, and emotion. Here, emotional responses and neural activity were observed as they evolved together with stimulus parameters over several minutes. Participants listened to a skilled music performance that included the natural fluctuations in timing and sound intensity that musicians use to evoke emotional responses. A mechanical performance of the same piece served as a control. Before and after fMRI scanning, participants reported real-time emotional responses on a 2-dimensional rating scale (arousal and valence) as they listened to each performance. During fMRI scanning, participants listened without reporting emotional responses. Limbic and paralimbic brain areas responded to the expressive dynamics of human music performance, and both emotion and reward related activations during music listening were dependent upon musical training. Moreover, dynamic changes in timing predicted ratings of emotional arousal, as well as real-time changes in neural activity. BOLD signal changes correlated with expressive timing fluctuations in cortical and subcortical motor areas consistent with pulse perception, and in a network consistent with the human mirror neuron system. These findings show that expressive music performance evokes emotion and reward related neural activations, and that music's affective impact on the brains of listeners is altered by musical training. Our observations are consistent with the idea that music performance evokes an emotional response through a form of empathy that is based, at least in part, on the perception of movement and on violations of pulse-based temporal expectancies.
Although human musical performances represent one of the most valuable achievements of mankind, the best musicians perform imperfectly. Musical rhythms are not entirely accurate and thus inevitably deviate from the ideal beat pattern. Nevertheless, computer generated perfect beat patterns are frequently devalued by listeners due to a perceived lack of human touch. Professional audio editing software therefore offers a humanizing feature which artificially generates rhythmic fluctuations. However, the built-in humanizing units are essentially random number generators producing only simple uncorrelated fluctuations. Here, for the first time, we establish long-range fluctuations as an inevitable natural companion of both simple and complex human rhythmic performances. Moreover, we demonstrate that listeners strongly prefer long-range correlated fluctuations in musical rhythms. Thus, the favorable fluctuation type for humanizing interbeat intervals coincides with the one generically inherent in human musical performances.
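To make the contrast concrete: a typical built-in "humanize" unit adds uncorrelated (white) jitter to interbeat intervals, whereas the fluctuations reported here are long-range correlated, i.e. 1/f-like. Below is a minimal sketch of both, assuming spectral shaping as the generation method; the method and magnitudes are illustrative, not the authors' procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

def one_over_f_noise(n: int, alpha: float = 1.0) -> np.ndarray:
    """Shape white noise in the frequency domain so power ~ 1/f^alpha."""
    white = rng.standard_normal(n)
    spectrum = np.fft.rfft(white)
    freqs = np.fft.rfftfreq(n)
    freqs[0] = freqs[1]                   # avoid dividing by zero at DC
    spectrum *= freqs ** (-alpha / 2.0)   # amplitude ~ f^(-alpha/2) => power ~ 1/f^alpha
    noise = np.fft.irfft(spectrum, n)
    return noise / noise.std()

beat = 0.5     # nominal interbeat interval in seconds (120 bpm)
jitter = 0.01  # fluctuation magnitude in seconds

# Typical built-in "humanize": uncorrelated deviations around the grid.
uncorrelated = beat + jitter * rng.standard_normal(1024)

# Human-like: long-range correlated (1/f) deviations, the type listeners preferred.
correlated = beat + jitter * one_over_f_noise(1024)
```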
One hundred and sixty consecutive patients less than 50 years of age (mean 38 years) referred for long term electrocardiographic recording were evaluated retrospectively. Significant cardiac arrhythmias were detected in 51 of 107 (48%) patients examined because of syncope or dizzy spells or both. Of 39 patients examined for cardiac complaints or presumed complex arrhythmias, 15 (38%) had significant arrhythmias. Of 14 patients examined because of otherwise unexplained strokes, nine had slow sinus rates. Of these, one patient had recently undertaken moderately intensive athletic activity and four had been undertaking vigorous athletic activities for several years. All of the 12 active athletes who were followed up on account of syncope or dizzy spells were free of symptoms after reducing their athletic activities. The cardiac rhythm returned to normal in four out of five who underwent repeat long term electrocardiographic recording. It is suggested that vigorous athletic activity in subjects of 30-50 years of age may transform the adaptive bradycardia of the athlete into a condition similar to the embolising sick sinus syndrome.
Despite much recent interest in the clinical neuroscience of music processing, the cognitive organization of music as a domain of non-verbal knowledge has been little studied. Here we addressed this issue systematically in two expert musicians with clinical diagnoses of semantic dementia and Alzheimer’s disease, in comparison with a control group of healthy expert musicians. In a series of neuropsychological experiments, we investigated associative knowledge of musical compositions (musical objects), musical emotions, musical instruments (musical sources) and music notation (musical symbols). These aspects of music knowledge were assessed in relation to musical perceptual abilities and extra-musical neuropsychological functions. The patient with semantic dementia showed relatively preserved recognition of musical compositions and musical symbols despite severely impaired recognition of musical emotions and musical instruments from sound. In contrast, the patient with Alzheimer’s disease showed impaired recognition of compositions, with somewhat better recognition of composer and musical era, and impaired comprehension of musical symbols, but normal recognition of musical emotions and musical instruments from sound. The findings suggest that music knowledge is fractionated, and superordinate musical knowledge is relatively more robust than knowledge of particular music. We propose that music constitutes a distinct domain of non-verbal knowledge but shares certain cognitive organizational features with other brain knowledge systems. Within the domain of music knowledge, dissociable cognitive mechanisms process knowledge derived from physical sources and the knowledge of abstract musical entities.
music; semantic memory; dementia; semantic dementia; Alzheimer’s disease
Patterns of syncope evaluation vary widely among physicians and hospitals. The aim of this study was to assess current diagnostic patterns and medical costs in the evaluation of patients presenting with syncope at the emergency department (ED) or the outpatient department (OPD) of a referral hospital.
This study included 171 consecutive patients with syncope, who visited the ED or OPD between January 2009 and July 2009.
The ED group had fewer episodes of syncope [2 (1-2) vs. 2 (1-5), p=0.014] and fewer prodromal symptoms (81.5% vs. 93.3%, p=0.018) than the OPD group. Diagnostic tests were more frequently performed in the ED group than in the OPD group (6.2±1.7 vs. 5.3±2.0; p=0.012). In addition, tests with low diagnostic yields were more frequently used in the ED group than in the OPD group. The total cost of syncope evaluation per patient was higher in the ED group than in the OPD group [823,000 (440,000-1,408,000) won vs. 420,000 (186,000-766,000) won, p<0.001].
There were some differences in the clinical characteristics of patients and diagnostic patterns in the evaluation of syncope between the ED and the OPD groups. Therefore, a selective diagnostic approach according to the presentation site is needed to improve diagnostic yields and to reduce the time and costs of evaluation of syncope.
Syncope; diagnosis; cost-benefit analysis
Despite a wealth of evidence for the involvement of the autonomic nervous system (ANS) in health and disease and the ability of music to affect ANS activity, few studies have systematically explored the therapeutic effects of music on ANS dysfunction. Furthermore, when ANS activity is quantified and analyzed, it is usually from a point of convenience rather than from an understanding of its physiological basis. After a review of the experimental and therapeutic literatures exploring music and the ANS, a “Neurovisceral Integration” perspective on the interplay between the central and autonomic nervous systems is introduced, and the associated implications for physiological, emotional, and cognitive health are explored. The construct of heart rate variability is discussed both as an example of this complex interplay and as a useful metric for exploring the sometimes subtle effect of music on autonomic response. Suggestions for future investigations using musical interventions are offered based on this integrative account.
autonomic nervous system; entrainment; heart rate variability; neurovisceral integration; psychophysiological responses
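Heart rate variability, the metric highlighted in the review above, is commonly summarized with simple time-domain statistics over a series of interbeat (RR) intervals. A minimal sketch of two standard measures follows; the RR series is invented for illustration:

```python
import numpy as np

def sdnn(rr_ms: np.ndarray) -> float:
    """SDNN: standard deviation of normal-to-normal RR intervals (ms),
    a global index of heart rate variability."""
    return float(np.std(rr_ms, ddof=1))

def rmssd(rr_ms: np.ndarray) -> float:
    """RMSSD: root mean square of successive RR differences (ms),
    commonly read as an index of parasympathetic (vagal) activity."""
    diffs = np.diff(rr_ms)
    return float(np.sqrt(np.mean(diffs ** 2)))

rr = np.array([812, 790, 845, 830, 798, 860, 822], dtype=float)  # example RR series
print(f"SDNN = {sdnn(rr):.1f} ms, RMSSD = {rmssd(rr):.1f} ms")
```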
Due to lack of efficacy in recent trials, current guidelines for the treatment of neurally-mediated (vasovagal) syncope do not promote cardiac pacemaker implantation. However, because blood pressure is often measured discontinuously, the finding of asystole during head-up tilt-induced (pre)syncope may lead to overdiagnosis of cardioinhibitory syncope and to treatment with cardiac pacemakers. Furthermore, physicians may be more inclined to implant cardiac pacemakers in older patients. We hypothesized that true cardioinhibitory syncope, in which the decrease in heart rate precedes the fall in blood pressure, is a very rare finding, which might explain the lack of efficacy of pacemakers in neurally-mediated syncope.
We studied 173 consecutive patients referred for unexplained syncope (114 women, 59 men, 42±1 years, 17±2 syncopal episodes). All had experienced (pre)syncope during head-up tilt testing followed by additional lower body negative pressure. We classified hemodynamic responses according to the modified Vasovagal Syncope International Study (VASIS) classification as mixed response (VASIS I), cardioinhibitory without (VASIS IIa) or with asystole (VASIS IIb), and vasodepressor (VASIS III). We then defined the exact temporal relationship between hypotension and bradycardia to identify patients with true cardioinhibitory syncope.
Of the (pre)syncopal events during tilt testing, 63% were classified as VASIS I, 6% as VASIS IIb, 2% as VASIS IIa, and 29% as VASIS III. Cardioinhibitory responses (VASIS class II) progressively decreased from the youngest to the oldest age quartile. With more detailed temporal analysis, blood pressure reduction preceded the heart-rate decrease in all but six individuals (97%) overall and in 10 out of 11 patients with asystole (VASIS IIb).
Hypotension precedes bradycardia onset during head-up tilt-induced (pre)syncope in the vast majority of patients, even in those classified as cardioinhibitory syncope according to the modified VASIS classification. Furthermore, cardioinhibitory syncope becomes less frequent with increasing age.
Music notations use both symbolic and spatial representation systems. Novice musicians do not have the training to associate symbolic information with musical identities, such as chords or rhythmic and melodic patterns. They provide an opportunity to explore the mechanisms underpinning multimodal learning when spatial encoding strategies of feature dimensions might be expected to dominate. In this study, we applied a range of transformations (such as time reversal) to short melodies and rhythms and asked novice musicians to identify them with or without the aid of notation. Performance using a purely spatial (graphic) notation was contrasted with the more symbolic, traditional western notation over a series of weekly sessions. The results showed learning effects for both notation types, but performance improved more for graphic notation. This points to greater compatibility of auditory and visual neural codes for novice musicians when using spatial notation, suggesting that pitch and time may be spatially encoded in multimodal associative memory. The findings also point to new strategies for training novice musicians.
spatial manipulation; visual; auditory; encoding; pitch; time; music; notation
The simplest and likeliest assumption concerning the cognitive bases of absolute pitch (AP) is that it originates in a particularly skilled function which matches the height of the perceived pitch to the verbal label of the musical tone. Since there is no difference in sound frequency resolution between AP and non-AP (NAP) musicians, the hypothesis of the present study is that the failure of NAP musicians in pitch identification stems mainly from an inability to retrieve the correct verbal label to be assigned to the perceived musical note. The primary hypothesis is that, when asked to identify tones, NAP musicians confuse the verbal labels to be attached to the stimulus on the basis of their phonetic content. Data from two AP tests are reported, in which subjects had to respond in the presence or in the absence of visually presented verbal note labels (fixed Do solmization). Results show that NAP musicians more frequently confuse notes having a similar vowel in the note label. They tend to confuse, e.g., a 261 Hz tone (Do) more often with Sol than with, e.g., La. As a second goal, we wondered whether this effect is lateralized, i.e. whether one hemisphere is more responsible than the other for the confusion of notes with similar labels. This question was addressed by observing pitch identification during dichotic listening. Results showed a right hemispheric disadvantage, in NAP but not AP musicians, in the retrieval of the verbal label to be assigned to the perceived pitch. The present results indicate that absolute pitch has strong verbal bases, at least from a cognitive point of view.
Recognizing melody in music involves detection of both the pitch intervals and the silence between sequentially presented sounds. This study tested the hypothesis that active musical training in adolescents facilitates the ability to passively detect sequential sound patterns compared to musically non-trained age-matched peers.
Twenty adolescents, aged 15–18 years, were divided into groups according to their musical training and current experience. A fixed order tone pattern was presented at various stimulus rates while electroencephalogram was recorded. The influence of musical training on passive auditory processing of the sound patterns was assessed using components of event-related brain potentials (ERPs).
The mismatch negativity (MMN) ERP component was elicited at longer stimulus onset asynchronies (SOAs) in musicians than in non-musicians, indicating that musically active adolescents were able to detect sound patterns across longer time intervals than their age-matched peers.
Musical training facilitates the detection of auditory patterns, supporting the ability to automatically recognize sequential sound patterns over longer time periods than in non-musical counterparts.
Auditory; Event-related potentials (ERPs); Adolescents; Children; Musicians; Mismatch negativity (MMN)
People with absolute pitch (AP) can categorize musical pitches without a reference, whereas people with tone-color synesthesia can see colors when hearing music. Both of these special populations perceive music in an above-normal manner. In this study we asked whether AP possessors and tone-color synesthetes might recruit specialized neural mechanisms during music listening. Furthermore, we tested the degree to which neural substrates recruited for music listening may be shared between these special populations. AP possessors, tone-color synesthetes, and matched controls rated the perceived arousal levels of musical excerpts in a sparse-sampled fMRI study. Both AP possessors and synesthetes showed enhanced superior temporal gyrus (STG, secondary auditory cortex) activation relative to controls during music listening, with left-lateralized enhancement in the AP possessors and right-lateralized enhancement in the synesthetes. When listening to highly arousing excerpts, AP possessors showed additional activation in the left STG, whereas synesthetes showed enhanced activity in the bilateral lingual gyrus and inferior temporal gyrus (late visual areas). Results support both shared and distinct neural enhancements in AP and synesthesia: common enhancements in early cortical mechanisms of perceptual analysis, followed by relative specialization in later association and categorization processes that support the unique behaviors of these special populations during music listening.
This study investigated the effects of voluntarily empathizing with a musical performer (i.e., cognitive empathy) on music-induced emotions and their underlying physiological activity. N = 56 participants watched video-clips of two operatic compositions performed in concerts, with low or high empathy instructions. Heart rate and heart rate variability, skin conductance level (SCL), and respiration rate (RR) were measured during music listening, and music-induced emotions were quantified using the Geneva Emotional Music Scale immediately after music listening. Listening to the aria with sad content in a high empathy condition facilitated the emotion of nostalgia and decreased SCL, in comparison to the low empathy condition. Listening to the song with happy content in a high empathy condition also facilitated the emotion of power and increased RR, in comparison to the low empathy condition. To our knowledge, this study offers the first experimental evidence that cognitive empathy influences emotion psychophysiology during music listening.
Musicians often say that they not only hear, but also "feel" music. To explore the contribution of tactile information to "feeling" musical rhythm, we investigated the degree to which auditory and tactile inputs are integrated in humans performing a musical meter recognition task. Subjects discriminated between two types of sequences, 'duple' (march-like rhythms) and 'triple' (waltz-like rhythms), presented in three conditions: 1) unimodal inputs (auditory or tactile alone); 2) various combinations of bimodal inputs, where sequences were distributed between the auditory and tactile channels such that a single channel did not produce coherent meter percepts; and 3) simultaneously presented bimodal inputs, where the two channels contained congruent or incongruent meter cues. We first show that meter is perceived similarly well (70%-85%) when tactile or auditory cues are presented alone. We next show in the bimodal experiments that auditory and tactile cues are integrated to produce coherent meter percepts. Performance is high (70%-90%) when all of the metrically important notes are assigned to one channel and is reduced to 60% when half of these notes are assigned to one channel. When the important notes are presented simultaneously to both channels, congruent cues enhance meter recognition (90%). Performance drops dramatically when subjects are presented with incongruent auditory cues (10%), as opposed to incongruent tactile cues (60%), demonstrating that auditory input dominates meter perception. We believe that these results are the first demonstration of cross-modal sensory grouping between any two senses.
In humans, emotions from music serve important communicative roles. Despite a growing interest in the neural basis of music perception, action and emotion, the majority of previous studies in this area have focused on the auditory aspects of music performances. Here we investigate how the brain processes the emotions elicited by audiovisual music performances. We used event-related functional magnetic resonance imaging, and in Experiment 1 we defined the areas responding to audiovisual (musician's movements with music), visual (musician's movements only), and auditory emotional (music only) displays. Subsequently a region of interest analysis was performed to examine if any of the areas detected in Experiment 1 showed greater activation for emotionally mismatching performances (combining the musician's movements with mismatching emotional sound) than for emotionally matching music performances (combining the musician's movements with matching emotional sound) as presented in Experiment 2 to the same participants. The insula and the left thalamus were found to respond consistently to visual, auditory and audiovisual emotional information and to have increased activation for emotionally mismatching displays in comparison with emotionally matching displays. In contrast, the right thalamus was found to respond to audiovisual emotional displays and to have similar activation for emotionally matching and mismatching displays. These results suggest that the insula and left thalamus have an active role in detecting emotional correspondence between auditory and visual information during music performances, whereas the right thalamus has a different role.
The appearance of the QRS complex in leads V3R and V4R was analysed in a series of 94 patients with acute posterior myocardial infarction. The cases of posterior myocardial infarction with direct signs of injury (ST segment elevation with a rise of 0.5 mm or more of point F and/or QS pattern) in leads V3R and/or V4R were complicated three times as often by atrioventricular block as those in which such signs were absent (66% and 22%, respectively; P < 0.001). When one of these signs was present in leads V3R and/or V4R, the disorder of conduction was "severe" (complete atrioventricular block or sinoatrial block with pauses) in half the cases and "unstable" (bradycardia below 50 beats/min; ventricular pause with or without syncope; widening of QRS complex; ventricular hyperexcitability) in one-third, justifying the introduction of a stimulating catheter. Such disorders were found in only 1 in 7 (14%) and in less than 1 in 10 (8%) of cases, respectively, when these signs were absent (P < 0.001). The association of ST segment elevation and QS pattern was rarer (15 cases) than the isolated finding of either sign. It was found in the most severe disorders of atrioventricular conduction. The changes observed in leads V3R and/or V4R before the appearance of atrioventricular block enable one to predict which patients with posterior myocardial infarction are the most likely to develop atrioventricular block. These electrocardiographic features seem to indicate septal involvement.
The differentiation of vasovagal syncope from epileptic seizure is sometimes problematic, since vasovagal syncope may mimic epileptic seizures in many ways. The present report describes a patient who had been diagnosed with, and treated for, epilepsy with medically-refractory seizures for 16 years. Unlike in epileptic seizures, tonic-clonic convulsions and postictal confusion are uncommon features of vasovagal syncope, but they may occur. Our patient was subjected to subcutaneous injection of one ml of normal saline, which caused asystole leading to hypoxia and consequently a typical tonic-clonic convulsion. This patient was proved to have vasovagal syncope. The findings in the present case suggest that the possibility of vasovagal syncope should always be taken into consideration when evaluating patients with medically-refractory or unusual patterns of seizures. In such circumstances, simultaneous video-electroencephalogram/electrocardiogram monitoring may help achieve the correct diagnosis.
Epilepsy; syncope; EEG
Music and speech are often cited as characteristically human forms of communication. Both share the features of hierarchical structure, complex sound systems, and sensorimotor sequencing demands, and both are used to convey and influence emotions, among other functions. Both music and speech also prominently use acoustical frequency modulations, perceived as variations in pitch, as part of their communicative repertoire. Given these similarities, and the fact that pitch perception and production involve the same peripheral transduction system (cochlea) and the same production mechanism (vocal tract), it might be natural to assume that pitch processing in speech and music would also depend on the same underlying cognitive and neural mechanisms. In this essay we argue that the processing of pitch information differs significantly for speech and music; specifically, we suggest that there are two pitch-related processing systems, one for more coarse-grained, approximate analysis and one for more fine-grained accurate representation, and that the latter is unique to music. More broadly, this dissociation offers clues about the interface between sensory and motor systems, and highlights the idea that multiple processing streams are a ubiquitous feature of neuro-cognitive architectures.
Pitch changes are an integral part of both spoken language and song. Despite sharing some of the same psychological and neural mechanisms, the authors conclude there are fundamental differences between them.
OBJECTIVE—To define the responses to head up tilt in a large group of normal adult subjects using the most widely employed protocol for tilt testing.
METHODS—127 normal subjects aged 19-88 years (mean (SD), 49 (20) years) without a previous history of syncope underwent tilt testing at 60° for 45 minutes or until syncope intervened. Blood pressure monitoring was performed with digital photoplethysmography, providing continuous, non-invasive, beat to beat heart rate and pressure measurements.
RESULTS—13% of subjects developed vasovagal syncope after a mean (SD) tilt time of 31.7 (12.4) minutes (range 8.5-44.9 minutes). Severe cardioinhibition during syncope was observed less often than is reported in patients investigated for syncope. There were no differences in the age or sex distributions of subjects with positive or negative outcomes, or in the proportions with cardioinhibitory and vasodepressor vasovagal syncope compared with previously reported patient populations. Subjects with negative outcomes showed age related differences in heart rate and blood pressure behaviour throughout tilt.
CONCLUSIONS—False positive results with tilting appear to be common. This has important implications for the use of diagnostic tilt testing. The magnitude of the heart rate and blood pressure changes observed during negative tilts largely invalidates previously suggested criteria for abnormal non-syncopal outcomes.
Keywords: syncope; head up tilt; postural hypotension