The score is a symbolic encoding that describes a piece of music, written according to the conventions of music theory, which must be rendered as sound (e.g., by a performer) before it may be perceived as music by the listener. In this paper we provide a step towards unifying music theory with music perception in terms of the relationship between notated rhythm (i.e., the score) and perceived syncopation. In our experiments we evaluated this relationship by manipulating the score, rendering it as sound and eliciting subjective judgments of syncopation. We used a metronome to provide explicit cues to the prevailing rhythmic structure (as defined in the time signature). Three-bar scores with time signatures of 4/4 and 6/8 were constructed using repeated one-bar rhythm-patterns, with each pattern built from basic half-bar rhythm-components. Our manipulations gave rise to various rhythmic structures, including polyrhythms and rhythms with missing strong- and/or down-beats. Listeners (N = 10) were asked to rate the degree of syncopation they perceived in response to a rendering of each score. We observed higher degrees of syncopation in time signatures of 6/8, for polyrhythms, and for rhythms featuring a missing down-beat. We also found that the location of a rhythm-component within the bar has a significant effect on perceived syncopation. Our findings provide new insight into models of syncopation and point the way towards areas in which the models may be improved.
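The link between notated rhythm and perceived syncopation can be made concrete with a toy metrical-weight model. The sketch below is illustrative only, in the spirit of Longuet-Higgins and Lee style syncopation measures; the weight vector, function name, and scoring rule are assumptions, not the models evaluated in the paper.

```python
# Illustrative toy syncopation score (NOT the paper's model).
# Metrical weights for the eight eighth-note positions of a 4/4 bar:
# downbeat strongest, then mid-bar, then quarter-note positions, then offbeats.
WEIGHTS_44 = [4, 1, 2, 1, 3, 1, 2, 1]

def syncopation_score(onsets, weights=WEIGHTS_44):
    """`onsets` is a list of 0/1 flags, one per metrical position.
    An onset on a weak position followed by silence on a stronger
    position contributes (strong weight - weak weight) to the score."""
    n = len(onsets)
    score = 0
    for i in range(n):
        j = (i + 1) % n  # next metrical position, wrapping around the bar
        if onsets[i] and not onsets[j] and weights[j] > weights[i]:
            score += weights[j] - weights[i]
    return score
```

Under this rule an on-beat pattern scores 0 while the same pattern shifted off the beat scores highly, matching the intuition that onsets sounding against silent strong positions (including a missing down-beat) are heard as syncopated.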
Music evokes complex emotions beyond pleasant/unpleasant or happy/sad dichotomies usually investigated in neuroscience. Here, we used functional neuroimaging with parametric analyses based on the intensity of felt emotions to explore a wider spectrum of affective responses reported during music listening. Positive emotions correlated with activation of left striatum and insula when high-arousing (Wonder, Joy) but right striatum and orbitofrontal cortex when low-arousing (Nostalgia, Tenderness). Irrespective of their positive/negative valence, high-arousal emotions (Tension, Power, and Joy) also correlated with activations in sensory and motor areas, whereas low-arousal categories (Peacefulness, Nostalgia, and Sadness) selectively engaged ventromedial prefrontal cortex and hippocampus. The right parahippocampal cortex activated in all but positive high-arousal conditions. Results also suggested some blends between activation patterns associated with different classes of emotions, particularly for feelings of Wonder or Transcendence. These data reveal a differentiated recruitment across emotions of networks involved in reward, memory, self-reflective, and sensorimotor processes, which may account for the unique richness of musical emotions.
emotion; fMRI; music; striatum; ventromedial prefrontal cortex
The ages of six male patients with the sick sinus syndrome ranged from 10 to 15 years when their symptoms began. At rest all had a heart rate of 60/min or less. Two had syncopal attacks which threatened life; one had only attacks of dizziness; the other three had no syncopal attacks but had recurrent attacks of supraventricular tachycardia ('brady-tachycardia syndrome') which were more resistant to drug therapy than is usual in childhood. They were not controlled or suppressed by digoxin when it was given. Substernal pain occurred in two patients who had syncope. In all patients the heart rate remained inappropriately slow after exercise and atropine. Cardiac pacemakers were used in the two patients with life-threatening syncope. Any patient who has dizziness or syncopal attacks and an inappropriately slow heart rate should have electrocardiograms recorded at rest and after exercise to record the heart rate and to look for abnormal P-waves.
Music is a powerful medium capable of eliciting a broad range of emotions. Although the relationship between language and music is well documented, relatively little is known about the effects of lyrics and the voice on the emotional processing of music and on listeners' preferences. In the present study, we investigated the effects of vocals in music on participants' perceived valence and arousal in songs. Participants (N = 50) made valence and arousal ratings for familiar songs that were presented with and without the voice. We observed robust effects of vocal content on perceived arousal. Furthermore, we found that the effect of the voice on enhancing arousal ratings is independent of the familiarity of the song and differs across gender and age: females were more influenced by vocals than males, and these gender effects were enhanced among older adults. Results highlight the effects of gender and aging in emotion perception and are discussed in terms of the social roles of music.
emotion; music; arousal; perception; gender; aging
This paper proposes a method to translate human EEG into music, so as to represent mental state by music. The arousal levels of the brain's mental state and of the music's emotion are implicitly used as the bridge between the mind and the music. The arousal level of the brain is based on EEG features extracted mainly by wavelet analysis, and the music arousal level is related to musical parameters such as pitch, tempo, rhythm, and tonality. While composing, some music principles (harmonics and structure) were taken into consideration. Using EEGs recorded during various sleep stages as an example, the music generated from them showed different patterns of pitch, rhythm, and tonality. Thirty-five volunteers listened to the music pieces, and a significant difference in music arousal levels was found. This implies that different mental states may be identified by the corresponding music, and so the music from EEG may be a potential tool for EEG monitoring, biofeedback therapy, and so forth.
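A minimal sketch of the arousal-to-music mapping described above. The function name, parameter ranges, and mapping rules are illustrative assumptions, not the authors' actual algorithm.

```python
# Illustrative mapping from a normalized EEG-derived arousal level to
# musical parameters (tempo, pitch register, rhythmic density).
# All ranges are assumed for the sketch, not taken from the paper.

def arousal_to_music_params(arousal):
    """Map arousal in [0, 1] to tempo (BPM), base pitch (MIDI note
    number), and a rhythmic-density factor (notes per beat)."""
    if not 0.0 <= arousal <= 1.0:
        raise ValueError("arousal must lie in [0, 1]")
    tempo = 50 + arousal * 90             # 50 BPM (deep sleep) .. 140 BPM (alert)
    base_pitch = 48 + round(arousal * 24) # C3 .. C5 register
    density = 1 + round(arousal * 3)      # sparser rhythm at low arousal
    return {"tempo_bpm": tempo,
            "base_pitch_midi": base_pitch,
            "notes_per_beat": density}
```

For example, a low-arousal (sleep-stage) EEG segment would yield slow, low-register, sparse music, while a high-arousal segment would yield fast, higher, denser music, which is the pattern contrast the listeners in the study discriminated.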
In general, sad music is thought to cause us to experience sadness, which is considered an unpleasant emotion. As a result, the question arises as to why we listen to sad music if it evokes sadness. One possible answer to this question is that we may actually feel positive emotions when we listen to sad music. This suggestion may appear to be counterintuitive; however, in this study, by dividing musical emotion into perceived emotion and felt emotion, we investigated this potential emotional response to music. We hypothesized that felt and perceived emotion may not actually coincide in this respect: sad music would be perceived as sad, but the experience of listening to sad music would evoke positive emotions. A total of 44 participants listened to musical excerpts and provided data on perceived and felt emotions by rating 62 descriptive words or phrases related to emotions on a scale that ranged from 0 (not at all) to 4 (very much). The results revealed that the sad music was perceived to be more tragic, whereas the actual experiences of the participants listening to the sad music induced them to feel more romantic, more blithe, and less tragic emotions than they actually perceived with respect to the same music. Thus, the participants experienced ambivalent emotions when they listened to the sad music. After considering the possible reasons that listeners were induced to experience emotional ambivalence by the sad music, we concluded that the formulation of a new model would be essential for examining the emotions induced by music and that this new model must entertain the possibility that what we experience when listening to music is vicarious emotion.
sad music; vicarious emotion; perceived/felt emotion; ambivalent emotion; pleasant emotion
Specific pieces of music can elicit strong emotions in listeners and, possibly in connection with these emotions, can be remembered even years later. However, episodic memory for emotional music compared with less emotional music has not yet been examined. We investigated whether emotional music is remembered better than less emotional music. Also, we examined the influence of musical structure on memory performance.
Recognition of 40 musical excerpts was investigated as a function of arousal, valence, and emotional intensity ratings of the music. In the first session the participants judged valence and arousal of the musical pieces. One week later, participants listened to the 40 old and 40 new musical excerpts randomly interspersed and were asked to make an old/new decision as well as to indicate arousal and valence of the pieces. Musical pieces that were rated as very positive were recognized significantly better.
Musical excerpts rated as very positive are remembered better. Valence seems to be an important modulator of episodic long-term memory for music. Evidently, strong emotions related to the musical experience facilitate memory formation and retrieval.
Listening to music has been found to reduce acute and chronic pain. The underlying mechanisms are poorly understood; however, emotion and cognitive mechanisms have been suggested to influence the analgesic effect of music. In this study we investigated the influence of familiarity, emotional and cognitive features, and cognitive style on music-induced analgesia. Forty-eight healthy participants were divided into three groups (empathizers, systemizers and balanced) and received acute pain induced by heat while listening to different sounds. Participants listened to unfamiliar Mozart music rated with high valence and low arousal, unfamiliar environmental sounds with similar valence and arousal as the music, an active distraction task (mental arithmetic) and a control, and rated the pain. Data showed that the active distraction led to significantly less pain than did the music or sounds. Both unfamiliar music and sounds reduced pain significantly when compared to the control condition; however, music was no more effective than sound to reduce pain. Furthermore, we found correlations between pain and emotion ratings. Finally, systemizers reported less pain during the mental arithmetic compared with the other two groups. These findings suggest that familiarity may be key in the influence of the cognitive and emotional mechanisms of music-induced analgesia, and that cognitive styles may influence pain perception.
Musicians imagine music during mental rehearsal, when reading from a score, and while composing. An important characteristic of music is its temporality. Among the parameters that vary through time is sound intensity, perceived as patterns of loudness. Studies of mental imagery for melodies (i.e., pitch and rhythm) show interference from concurrent musical pitch and verbal tasks, but how we represent musical changes in loudness is unclear. Theories suggest that our perceptions of loudness change relate to our perceptions of force or effort, implying a motor representation. An experiment was conducted to investigate the modalities that contribute to imagery for loudness change. Musicians performed a within-subjects loudness change recall task, comprising 48 trials. First, participants heard a musical scale played with varying patterns of loudness, which they were asked to remember. There followed an empty interval of 8 s (nil distractor control), or the presentation of a series of four sine tones, or four visual letters or three conductor gestures, also to be remembered. Participants then saw an unfolding score of the notes of the scale, during which they were to imagine the corresponding scale in their mind while adjusting a slider to indicate the imagined changes in loudness. Finally, participants performed a recognition task of the tone, letter, or gesture sequence. Based on the motor hypothesis, we predicted that observing and remembering conductor gestures would impair loudness change scale recall, while observing and remembering tone or letter string stimuli would not. Results support this prediction, with loudness change recalled less accurately in the gestures condition than in the control condition. An effect of musical training suggests that auditory and motor imagery ability may be closely related to domain expertise.
mental imagery; loudness; music; motor processing; melody; working memory
Blood centers rely heavily upon adolescent donors to meet blood demand, but pre-syncope and syncope are more frequent in younger donors. Studies have suggested that administration of water prior to donation may reduce syncope and/or pre-syncope in this group.
Study design and methods
We conducted a randomized, controlled trial to establish the effect of pre-loading with 500 ml of water on the rate of syncope and pre-syncope in adolescent donors. School collection sites in the Eastern Cape Province of South Africa were randomized to receive water or not. Incidence of syncope and pre-syncope was compared between randomization groups using multivariable logistic regression.
Of 2,464 study participants, 1,337 received water and 1,127 did not; groups differed slightly by gender and race. Syncope or pre-syncope was seen in 23 (1.7%) of the treatment and 18 (1.6%) of the control arm subjects. After adjusting for race, gender, age and donation history, there was no difference in outcome between the water and no-water arms (adjusted odds ratio [OR] = 0.80, 95% CI 0.42–1.53). Black donors had 7-fold lower odds of syncope or pre-syncope than their white counterparts (adjusted OR 0.14, 95% CI 0.04–0.47).
Preloading adolescent donors with 500 ml of water did not have a major effect in reducing syncope and pre-syncope in this South African population. Our adolescent donors had a lower overall syncope and pre-syncope rate than similar populations in the USA, limiting the statistical power of the study. We confirmed much lower rates of syncope and pre-syncope among young Black donors.
Blood Donors; Syncope; Randomized control trial; South Africa; Adolescent
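The unadjusted odds ratio can be recomputed from the trial's raw counts (23/1,337 water vs. 18/1,127 no water) with the standard 2x2-table formula and a Woolf-method confidence interval. Note that the paper's reported OR of 0.80 is the adjusted estimate from multivariable logistic regression (controlling for race, gender, age and donation history), so it differs from this crude value; the code is a generic sketch, not the study's analysis.

```python
from math import log, exp, sqrt

def odds_ratio(events_a, total_a, events_b, total_b):
    """Unadjusted odds ratio of group A vs. group B with an
    approximate 95% CI (Woolf logit method)."""
    a, b = events_a, total_a - events_a  # group A: events / non-events
    c, d = events_b, total_b - events_b  # group B: events / non-events
    or_ = (a * d) / (b * c)
    se = sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = exp(log(or_) - 1.96 * se)
    hi = exp(log(or_) + 1.96 * se)
    return or_, (lo, hi)

# Raw counts reported in the trial: 23/1,337 (water) vs. 18/1,127 (control).
crude_or, crude_ci = odds_ratio(23, 1337, 18, 1127)
```

The crude OR is close to 1 and its CI spans 1, consistent with the trial's null finding even before covariate adjustment.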
Most perceived parameters of sound (e.g. pitch, duration, timbre) can also be imagined in the absence of sound. These parameters are imagined more veridically by expert musicians than non-experts. Evidence for whether loudness is imagined, however, is conflicting. In music, the question of whether loudness is imagined is particularly relevant due to its role as a principal parameter of performance expression. This study addressed the hypothesis that the veridicality of imagined loudness improves with increasing musical expertise. Experts, novices and non-musicians imagined short passages of well-known classical music under two counterbalanced conditions: 1) while adjusting a slider to indicate imagined loudness of the music and 2) while tapping out the rhythm to indicate imagined timing. Subtests assessed music listening abilities and working memory span to determine whether these factors, also hypothesised to improve with increasing musical expertise, could account for imagery task performance. Similarity between each participant’s imagined and listening loudness profiles and reference recording intensity profiles was assessed using time series analysis and dynamic time warping. The results suggest a widespread ability to imagine the loudness of familiar music. The veridicality of imagined loudness tended to be greatest for the expert musicians, supporting the predicted relationship between musical expertise and musical imagery ability.
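The profile-similarity step mentioned above uses dynamic time warping, which aligns two loudness time series that may be locally stretched or compressed relative to each other. Below is a minimal textbook DTW distance, not the study's analysis code.

```python
def dtw_distance(x, y):
    """Minimal dynamic time warping distance between two loudness
    profiles (sequences of numbers). O(len(x) * len(y)) dynamic
    programme; smaller values mean more similar profiles."""
    n, m = len(x), len(y)
    INF = float("inf")
    # D[i][j] = cost of best alignment of x[:i] with y[:j]
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # insertion
                                 D[i][j - 1],      # deletion
                                 D[i - 1][j - 1])  # match
    return D[n][m]
```

Because DTW tolerates local tempo differences, an imagined loudness profile that rises and falls in the right places still scores as similar to the reference recording's intensity profile even if the participant's timing drifts.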
Williams syndrome (WS), a genetic, neurodevelopmental disorder, is of keen interest to music cognition researchers because of its characteristic auditory sensitivities and emotional responsiveness to music. However, actual musical perception and production abilities are more variable. We examined musicality in WS through the lens of amusia and explored how their musical perception abilities related to their auditory sensitivities, musical production skills, and emotional responsiveness to music. In our sample of 73 adolescents and adults with WS, 11% met criteria for amusia, which is higher than the 4% prevalence rate reported in the typically developing (TD) population. Amusia was not related to auditory sensitivities but was related to musical training. Performance on the amusia measure strongly predicted musical skill but not emotional responsiveness to music, which was better predicted by general auditory sensitivities. This study represents the first time amusia has been examined in a population with a known neurodevelopmental genetic disorder with a range of cognitive abilities. Results have implications for the relationships across different levels of auditory processing, musical skill development, and emotional responsiveness to music, as well as the understanding of gene-brain-behavior relationships in individuals with WS and TD individuals with and without amusia.
Williams syndrome; music; amusia; pitch perception; auditory sensitivity
Apart from its natural relevance to cognition, music provides a window into the intimate relationships between production, perception, experience, and emotion. Here, emotional responses and neural activity were observed as they evolved together with stimulus parameters over several minutes. Participants listened to a skilled music performance that included the natural fluctuations in timing and sound intensity that musicians use to evoke emotional responses. A mechanical performance of the same piece served as a control. Before and after fMRI scanning, participants reported real-time emotional responses on a 2-dimensional rating scale (arousal and valence) as they listened to each performance. During fMRI scanning, participants listened without reporting emotional responses. Limbic and paralimbic brain areas responded to the expressive dynamics of human music performance, and both emotion and reward related activations during music listening were dependent upon musical training. Moreover, dynamic changes in timing predicted ratings of emotional arousal, as well as real-time changes in neural activity. BOLD signal changes correlated with expressive timing fluctuations in cortical and subcortical motor areas consistent with pulse perception, and in a network consistent with the human mirror neuron system. These findings show that expressive music performance evokes emotion and reward related neural activations, and that music's affective impact on the brains of listeners is altered by musical training. Our observations are consistent with the idea that music performance evokes an emotional response through a form of empathy that is based, at least in part, on the perception of movement and on violations of pulse-based temporal expectancies.
One hundred and sixty consecutive patients less than 50 years of age (mean 38 years) referred for long term electrocardiographic recording were evaluated retrospectively. Significant cardiac arrhythmias were detected in 51 of 107 (48%) patients examined because of syncope or dizzy spells or both. Of 39 patients examined for cardiac complaints or presumed complex arrhythmias, 15 (38%) had significant arrhythmias. Of 14 patients examined because of otherwise unexplained strokes, nine had slow sinus rates. Of these, one patient had recently undertaken moderately intensive athletic activity and four had been undertaking vigorous athletic activities for several years. All of the 12 active athletes who were followed up on account of syncope or dizzy spells were free of symptoms after reducing their athletic activities. The cardiac rhythm returned to normal in four out of five who underwent repeat long term electrocardiographic recording. It is suggested that vigorous athletic activity in subjects of 30-50 years of age may transform the adaptive bradycardia of the athlete into a condition similar to the embolising sick sinus syndrome.
Although human musical performances represent one of the most valuable achievements of mankind, the best musicians perform imperfectly. Musical rhythms are not entirely accurate and thus inevitably deviate from the ideal beat pattern. Nevertheless, computer generated perfect beat patterns are frequently devalued by listeners due to a perceived lack of human touch. Professional audio editing software therefore offers a humanizing feature which artificially generates rhythmic fluctuations. However, the built-in humanizing units are essentially random number generators producing only simple uncorrelated fluctuations. Here, for the first time, we establish long-range correlated fluctuations as an inevitable natural companion of both simple and complex human rhythmic performances. Moreover, we demonstrate that listeners strongly prefer long-range correlated fluctuations in musical rhythms. Thus, the favorable fluctuation type for humanizing interbeat intervals coincides with the one generically inherent in human musical performances.
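The contrast between uncorrelated and long-range correlated fluctuations can be sketched with the Voss-McCartney approximation to 1/f ("pink") noise, in which several white-noise sources are summed, each updated half as often as the previous one. This is an illustrative humanizer only; the function names and scaling are assumptions, not the algorithm of any commercial software or of the study.

```python
import random

def voss_pink_noise(n, num_sources=8, seed=1):
    """Approximate 1/f (pink) noise via the Voss-McCartney scheme:
    sum `num_sources` white-noise sources, where source k is redrawn
    every 2**k samples, giving long-range correlated fluctuations."""
    rng = random.Random(seed)
    sources = [rng.uniform(-1, 1) for _ in range(num_sources)]
    out = []
    for i in range(n):
        for k in range(num_sources):
            if i % (2 ** k) == 0:      # source k changes every 2**k samples
                sources[k] = rng.uniform(-1, 1)
        out.append(sum(sources) / num_sources)  # average stays in [-1, 1]
    return out

def humanize(onsets_ms, scale_ms=5.0):
    """Shift nominal onset times (ms) by scaled pink-noise deviations,
    instead of the uncorrelated jitter of typical humanizing units."""
    dev = voss_pink_noise(len(onsets_ms))
    return [t + scale_ms * d for t, d in zip(onsets_ms, dev)]
```

Because neighbouring deviations share slowly-updated noise sources, successive interbeat intervals drift together rather than jittering independently, which is the correlated structure listeners preferred.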
Falls are frequent in the elderly and affect mortality, morbidity, loss of functional capacity and institutionalization. In the older patient the incidence of falls can sometimes be underestimated, even in the absence of a clear cognitive impairment, because it is often difficult to reconstruct the dynamics. Falls due to syncope are commonly associated with retrograde amnesia, and in 40 to 60% of cases falls occur in the absence of witnesses. The pathogenesis of falls is often multifactorial, due to physiological age-related changes, to properly pathological factors, or to the environment. The identification of risk factors is essential in the planning of preventive measures. Syncope is one of the major causes of falls. About 20% of cardiovascular syncope in patients older than 70 appears as a fall, and more than 20% of older people with Carotid Sinus Syndrome complain of falls as well as syncope. These data clearly indicate that older patients with a history of falls should undergo a cardiovascular and neuroautonomic assessment in addition to the survey of other risk factors. Multifactorial assessment requires a synergy of various specialists. The geriatrician coordinates the multidisciplinary intervention in order to make the most effective evaluation of the risk of falling, searching for all predisposing factors and aiming towards a program of prevention. In clear pathological conditions specific treatment can be instituted. Particular attention must indeed be paid to the re-evaluation of drug therapy, with dose adjustments or withdrawal, especially for antihypertensives, diuretics and benzodiazepines. The Guidelines of the American Geriatrics Society recommend modification of environmental hazards, training paths, hip protectors and appropriate use of support tools (sticks, walkers), which can be effective elements of a multifactorial intervention program. Balance exercises are also recommended.
In conclusion, an initial assessment, supported by a comprehensive cardiovascular and neuroautonomic evaluation, allows for reaching a final diagnosis in most cases, demonstrating a key role in the real identification of the etiology of the fall and implementing the treatment measures.
falls; elderly; multifactorial assessment; prevention strategies
Patterns of syncope evaluation vary widely among physicians and hospitals. The aim of this study was to assess current diagnostic patterns and medical costs in the evaluation of patients presenting with syncope at the emergency department (ED) or the outpatient department (OPD) of a referral hospital.
Materials and Methods
This study included 171 consecutive patients with syncope, who visited the ED or OPD between January 2009 and July 2009.
The ED group had fewer episodes of syncope [2 (1-2) vs. 2 (1-5), p=0.014] and fewer prodromal symptoms (81.5% vs. 93.3%, p=0.018) than the OPD group. Diagnostic tests were more frequently performed in the ED group than in the OPD group (6.2±1.7 vs. 5.3±2.0; p=0.012). In addition, tests with low diagnostic yields were more frequently used in the ED group than in the OPD group. The total cost of syncope evaluation per patient was higher in the ED group than in the OPD group [823,000 (440,000–1,408,000) won vs. 420,000 (186,000–766,000) won, p<0.001].
There were some differences in the clinical characteristics of patients and diagnostic patterns in the evaluation of syncope between the ED and the OPD groups. Therefore, a selective diagnostic approach according to the presentation site is needed to improve diagnostic yields and to reduce the time and costs of evaluation of syncope.
Syncope; diagnosis; cost-benefit analysis
Despite much recent interest in the clinical neuroscience of music processing, the cognitive organization of music as a domain of non-verbal knowledge has been little studied. Here we addressed this issue systematically in two expert musicians with clinical diagnoses of semantic dementia and Alzheimer’s disease, in comparison with a control group of healthy expert musicians. In a series of neuropsychological experiments, we investigated associative knowledge of musical compositions (musical objects), musical emotions, musical instruments (musical sources) and music notation (musical symbols). These aspects of music knowledge were assessed in relation to musical perceptual abilities and extra-musical neuropsychological functions. The patient with semantic dementia showed relatively preserved recognition of musical compositions and musical symbols despite severely impaired recognition of musical emotions and musical instruments from sound. In contrast, the patient with Alzheimer’s disease showed impaired recognition of compositions, with somewhat better recognition of composer and musical era, and impaired comprehension of musical symbols, but normal recognition of musical emotions and musical instruments from sound. The findings suggest that music knowledge is fractionated, and superordinate musical knowledge is relatively more robust than knowledge of particular music. We propose that music constitutes a distinct domain of non-verbal knowledge but shares certain cognitive organizational features with other brain knowledge systems. Within the domain of music knowledge, dissociable cognitive mechanisms process knowledge derived from physical sources and the knowledge of abstract musical entities.
music; semantic memory; dementia; semantic dementia; Alzheimer’s disease
Despite a wealth of evidence for the involvement of the autonomic nervous system (ANS) in health and disease and the ability of music to affect ANS activity, few studies have systematically explored the therapeutic effects of music on ANS dysfunction. Furthermore, when ANS activity is quantified and analyzed, it is usually from a point of convenience rather than from an understanding of its physiological basis. After a review of the experimental and therapeutic literatures exploring music and the ANS, a “Neurovisceral Integration” perspective on the interplay between the central and autonomic nervous systems is introduced, and the associated implications for physiological, emotional, and cognitive health are explored. The construct of heart rate variability is discussed both as an example of this complex interplay and as a useful metric for exploring the sometimes subtle effect of music on autonomic response. Suggestions for future investigations using musical interventions are offered based on this integrative account.
autonomic nervous system; entrainment; heart rate variability; neurovisceral integration; psychophysiological responses
A 32-year-old Spanish man presented to hospital after a second episode of syncope immediately following exercise. On admission, his vital signs were stable and he had a regular heart rate of 60 bpm. ECG and transthoracic echocardiogram were normal. He completed 15 min of a Bruce protocol exercise test. One minute and ten seconds into recovery, he lost consciousness. His ECG demonstrated sinus arrest with pauses of up to 5 s and subsequently junctional ectopy. After 38 s, his heart returned to sinus rhythm at a rate of 140 bpm and he regained consciousness. Vasovagal syncope following exercise in the absence of structural heart disease is uncommonly reported. When cases of exercise-related syncope in patients with structurally normal hearts have been reported, the typical patient is a young male who engages in physical training. Treatment strategies in patients suffering from vasovagal asystole are necessarily empirical, and careful judgement based on the specific features of individual cases needs to be employed.
Due to lack of efficacy in recent trials, current guidelines for the treatment of neurally-mediated (vasovagal) syncope do not promote cardiac pacemaker implantation. However, the finding of asystole during head-up tilt-induced (pre)syncope may lead to overdiagnosis of cardioinhibitory syncope and overtreatment with cardiac pacemakers, as blood pressure is often measured discontinuously. Furthermore, physicians may be more inclined to implant cardiac pacemakers in older patients. We hypothesized that true cardioinhibitory syncope, in which the decrease in heart rate precedes the fall in blood pressure, is a very rare finding, which might explain the lack of efficacy of pacemakers in neurally-mediated syncope.
We studied 173 consecutive patients referred for unexplained syncope (114 women, 59 men, 42±1 years, 17±2 syncopal episodes). All had experienced (pre)syncope during head-up tilt testing followed by additional lower body negative pressure. We classified hemodynamic responses according to the modified Vasovagal Syncope International Study (VASIS) classification as mixed response (VASIS I), cardioinhibitory without (VASIS IIa) or with asystole (VASIS IIb), and vasodepressor (VASIS III). Then, we defined the exact temporal relationship between hypotension and bradycardia to identify patients with true cardioinhibitory syncope.
Of the (pre)syncopal events during tilt testing, 63% were classified as VASIS I, 6% as VASIS IIb, 2% as VASIS IIa, and 29% as VASIS III. Cardioinhibitory responses (VASIS class II) progressively decreased from the youngest to the oldest age quartile. With more detailed temporal analysis, blood pressure reduction preceded the heart-rate decrease in all but six individuals (97%) overall and in 10 out of 11 patients with asystole (VASIS IIb).
Hypotension precedes bradycardia onset during head-up tilt-induced (pre)syncope in the vast majority of patients, even in those classified as cardioinhibitory syncope according to the modified VASIS classification. Furthermore, cardioinhibitory syncope becomes less frequent with increasing age.
Music notations use both symbolic and spatial representation systems. Novice musicians do not have the training to associate symbolic information with musical identities, such as chords or rhythmic and melodic patterns. They provide an opportunity to explore the mechanisms underpinning multimodal learning when spatial encoding strategies of feature dimensions might be expected to dominate. In this study, we applied a range of transformations (such as time reversal) to short melodies and rhythms and asked novice musicians to identify them with or without the aid of notation. Performance using a purely spatial (graphic) notation was contrasted with the more symbolic, traditional western notation over a series of weekly sessions. The results showed learning effects for both notation types, but performance improved more for graphic notation. This points to greater compatibility of auditory and visual neural codes for novice musicians when using spatial notation, suggesting that pitch and time may be spatially encoded in multimodal associative memory. The findings also point to new strategies for training novice musicians.
spatial manipulation; visual; auditory; encoding; pitch; time; music; notation
Musical rhythm perception is a natural human ability that involves complex cognitive processes. Rhythm refers to the organization of events in time, and musical rhythms have an underlying hierarchical metrical structure. The metrical structure induces the feeling of a beat, and the extent to which a rhythm induces the feeling of a beat is referred to as its metrical strength. Binary ratios are the most frequent interval ratio in musical rhythms. Rhythms with hierarchical binary ratios are better discriminated and reproduced than rhythms with hierarchical non-binary ratios. However, it remains unclear whether a superiority of serial binary over non-binary ratios in rhythm perception and reproduction exists. In addition, how different types of serial ratios influence the metrical strength of rhythms remains to be elucidated. The present study investigated serial binary vs. non-binary ratios in a reproduction task. Rhythms formed with exclusively binary (1:2:4:8), non-binary integer (1:3:5:6), and non-integer (1:2.3:5.3:6.4) ratios were examined within a constant meter. The results showed that the 1:2:4:8 rhythm type was more accurately reproduced than the 1:3:5:6 and 1:2.3:5.3:6.4 rhythm types, and the 1:2.3:5.3:6.4 rhythm type was more accurately reproduced than the 1:3:5:6 rhythm type. Further analyses showed that reproduction performance was better predicted by the distribution pattern of event occurrences within an inter-beat interval than by the coincidence of events with beats, or the magnitude and complexity of interval ratios. Whereas rhythm theories and empirical data emphasize the role of the coincidence of events with beats in determining metrical strength and predicting rhythm performance, the present results suggest that rhythm processing may be better understood when the distribution pattern of event occurrences is taken into account. These results provide new insights into the mechanisms underlying musical rhythm perception.
music; rhythm; binary; ratio; beat; distribution pattern
The simplest and likeliest assumption concerning the cognitive bases of absolute pitch (AP) is that at its origin there is a particularly skilled function which matches the height of the perceived pitch to the verbal label of the musical tone. Since there is no difference in sound frequency resolution between AP and non-AP (NAP) musicians, the hypothesis of the present study is that the failure of NAP musicians in pitch identification lies mainly in an inability to retrieve the correct verbal label to be assigned to the perceived musical note. The primary hypothesis is that, when asked to identify tones, NAP musicians confuse the verbal labels to be attached to the stimulus on the basis of their phonetic content. Data from two AP tests are reported, in which subjects had to respond in the presence or in the absence of visually presented verbal note labels (fixed Do solmization). Results show that NAP musicians more frequently confuse notes whose labels share a similar vowel. For example, they tend to confuse a 261 Hz tone (Do) more often with Sol than with La. As a second goal, we wondered whether this effect is lateralized, i.e. whether one hemisphere is more responsible than the other for the confusion of notes with similar labels. This question was addressed by observing pitch identification during dichotic listening. Results showed that there is a right hemispheric disadvantage, in NAP but not AP musicians, in the retrieval of the verbal label to be assigned to the perceived pitch. The present results indicate that absolute pitch has strong verbal bases, at least from a cognitive point of view.
The present study used a temporal bisection task with short (<2 s) and long (>2 s) stimulus durations to investigate the effect on time estimation of several musical parameters associated with emotional changes in affective valence and arousal. In order to manipulate the positive and negative valence of music, Experiments 1 and 2 contrasted the effect of musical structure with pieces played normally and backwards, which were judged to be pleasant and unpleasant, respectively. This effect of valence was combined with a subjective arousal effect by changing the tempo of the musical pieces (fast vs. slow) (Experiment 1) or their instrumentation (orchestral vs. piano pieces) (Experiment 2). The musical pieces were indeed judged more arousing with a fast than with a slow tempo, and with an orchestral than with a piano timbre. In Experiment 3, affective valence was also tested by contrasting the effect of tonal (pleasant) vs. atonal (unpleasant) versions of the same musical pieces. The results showed that the effect of tempo in music, associated with a subjective arousal effect, was the major factor producing time distortions, with time being judged longer for fast than for slow tempi. When the tempo was held constant, no significant effect of timbre on time judgment was found, although the orchestral music was judged to be more arousing than the piano music. Nevertheless, emotional valence did modulate the tempo effect on time perception, the pleasant music being judged shorter than the unpleasant music.
time perception; music; emotion; valence; arousal