When witnessing face-to-face conversation, observers perceive communication as authentic based on the social contingency of the nonverbal feedback cues (‘back-channeling’) produced by non-speaking interactors. The current study investigated the generality of this function by focusing on nonverbal communication in musical improvisation. A perceptual experiment was conducted to test whether observers can reliably distinguish genuine from fake (mismatched) duos on the basis of musicians’ nonverbal cues, and how this judgement is affected by observers’ musical background and rhythm perception skill. Twenty-four musicians were recruited to perform duo improvisations, which included solo episodes, in two styles: standard jazz (where rhythm is based on a regular pulse) or free improvisation (where rhythm is non-pulsed). The improvisations were recorded using a motion capture system to generate 16 ten-second point-light displays (with audio) of the soloist and the silent non-soloing musician (‘back-channeler’). Sixteen further displays were created by splicing soloists with back-channelers from different duos. Participants (N = 60) with various musical backgrounds were asked to rate the point-light displays as either real or fake. Results indicated that participants were sensitive to the real/fake distinction in the free improvisation condition, independently of musical experience. Individual differences in rhythm perception skill did not account for performance in the free condition but were positively correlated with accuracy in the standard jazz condition. These findings suggest that the perception of back-channeling in free improvisation does not depend on music-specific skills but reflects a general ability. They invite further study of the links between interpersonal dynamics in conversation and musical interaction.
Language and music, two of the most unique human cognitive abilities, are combined in song, rendering it an ecological model for comparing speech and music cognition. The present study was designed to determine whether words and melodies in song are processed interactively or independently, and to examine the influence of attention on the processing of words and melodies in song. Event-Related brain Potentials (ERPs) and behavioral data were recorded while non-musicians listened to pairs of sung words (prime and target) presented in four experimental conditions: same word, same melody; same word, different melody; different word, same melody; different word, different melody. Participants were asked to attend to either the words or the melody, and to perform a same/different task. In both attentional tasks, different word targets elicited an N400 component, as predicted based on previous results. Most interestingly, different melodies (sung with the same word) elicited an N400 component followed by a late positive component. Finally, ERP and behavioral data converged in showing interactions between the linguistic and melodic dimensions of sung words. The finding that the N400 effect, a well-established marker of semantic processing, was modulated by musical melody in song suggests that variations in musical features affect word processing in sung language. Implications of the interactions between words and melody are discussed in light of evidence for shared neural processing resources between the phonological/semantic aspects of language and the melodic/harmonic aspects of music.
Musicians have been used extensively to study neural correlates of long-term practice, but no studies have investigated the specific effects of training musical creativity. Here, we used human functional MRI to measure brain activity during improvisation in a sample of 39 professional pianists with varying backgrounds in classical and jazz piano playing. We found total hours of improvisation experience to be negatively associated with activity in frontoparietal executive cortical areas. In contrast, improvisation training was positively associated with functional connectivity of the bilateral dorsolateral prefrontal cortices, dorsal premotor cortices, and pre-supplementary motor areas. The effects remained significant when controlling for hours of classical piano practice and age. These results indicate, first, that even the neural mechanisms involved in creative behaviors, which require flexible online generation of novel and meaningful output, can be automated by training and, second, that improvisational musical training can influence functional brain properties at a network level. We suggest that the greater functional connectivity seen in experienced improvisers may reflect a more efficient exchange of information within associative networks of importance for musical creativity.
Creativity; expertise; fMRI; improvisation; music; plasticity
The inability to hear music well may contribute to decreased quality of life for cochlear implant (CI) users. Researchers have recently reported on the generally poor ability of CI users to perceive music, and a few have reported on CI users’ enjoyment of music. However, the relation between music perception skills and music enjoyment is much less explored. Only one study has attempted to predict CI users’ enjoyment and perception of music from the users’ demographic variables and other perceptual skills (Gfeller et al., 2008). Gfeller’s results yielded different predictive relationships for music perception and music enjoyment, and the relationships were weak, at best.
The first goal of this study is to clarify the nature and relationship between music perception skills and musical enjoyment for CI users, by employing a battery of music tests. The second goal is to determine whether normal hearing (NH) subjects, listening with a CI-simulation, can be used as a model to represent actual CI users for either music enjoyment ratings or music perception tasks.
A prospective, cross-sectional observational study. Original music stimuli (unprocessed) were presented to CI users, and music stimuli processed with CI-simulation software were presented to twenty NH listeners (CIsim). As a control, original music stimuli were also presented to five other NH listeners. All listeners appraised twenty-four musical excerpts, performed music perception tests, and filled out a musical background questionnaire. Music perception tests were the Appreciation of Music in Cochlear Implantees (AMICI), Montreal Battery for Evaluation of Amusia (MBEA), Melodic Contour Identification (MCI), and University of Washington Clinical Assessment of Music Perception (UW-CAMP).
Twenty-five NH adults (22–56 years old), recruited from the local and research communities, participated in the study. Ten adult CI users (46–80 years old), recruited from the patient population of the local adult cochlear implant program, also participated in this study.
Data Collection and Analysis
Musical excerpts were appraised using a 7-point rating scale and music perception tests were scored as designed. Analysis of variance was performed on appraisal ratings, perception scores, and questionnaire data with listener group as a factor. Correlations were computed between musical appraisal ratings and perceptual scores on each music test.
Music is rated as more enjoyable by CI users than by the NH listeners hearing music through a simulation (CIsim), and the difference is statistically significant. For roughly half of the music perception tests, there are no statistically significant differences between the performance of the CI users and of the CIsim listeners. Generally, correlations between appraisal ratings and music perception scores are weak or non-existent.
NH adults listening to music that has been processed through a CI-simulation program are a reasonable model for actual CI users for many music perception skills, but not for rating musical enjoyment. For CI users, the apparent independence of music perception skills and music enjoyment (as assessed by appraisals), indicates that music enjoyment should not be assumed and should be examined explicitly.
music; cochlear implant; cochlear implant simulation; timbre; melody; appraisal
We present an EEG study of two music improvisation experiments. Professional musicians with a high level of improvisation skill were asked to perform music either from a score (composed music) or in improvisation. Each piece of music was performed in two different modes: strict mode and “let-go” mode. Synchronized EEG data were measured from both musicians and listeners. We used one of the most reliable causality measures, conditional mutual information from mixed embedding (MIME), to analyze directed correlations between different EEG channels, and combined it with network theory to construct both intra-brain and cross-brain networks. Differences were identified in intra-brain neural networks between composed music and improvisation and between strict mode and “let-go” mode. Particular brain regions, such as frontal, parietal, and temporal regions, were found to play a key role in differentiating brain activity between playing conditions. By comparing the levels of degree centrality in intra-brain neural networks, we found a difference between the responses of musicians and listeners across the different playing conditions.
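The network step of this kind of pipeline can be sketched concretely: given a matrix of pairwise directed-causality estimates (which a measure such as MIME would supply), thresholding it yields a directed graph whose degree centralities can then be compared across playing conditions. The matrix values and threshold below are hypothetical, not actual MIME output:

```python
import numpy as np

def degree_centrality(causality, threshold):
    """Build a directed graph by thresholding a pairwise causality
    matrix and return normalized in-/out-degree centralities.

    causality[i, j] is the estimated directed influence of channel i
    on channel j; both the matrix and the threshold are illustrative."""
    A = (causality > threshold).astype(int)
    np.fill_diagonal(A, 0)                 # ignore self-connections
    n = A.shape[0]
    out_deg = A.sum(axis=1) / (n - 1)      # how many channels i drives
    in_deg = A.sum(axis=0) / (n - 1)       # how many channels drive i
    return in_deg, out_deg

# toy 4-channel causality matrix (hypothetical values)
C = np.array([[0.0, 0.8, 0.1, 0.7],
              [0.2, 0.0, 0.9, 0.1],
              [0.1, 0.1, 0.0, 0.6],
              [0.0, 0.2, 0.1, 0.0]])
in_deg, out_deg = degree_centrality(C, threshold=0.5)
```

Comparing such centrality profiles between conditions (composed vs. improvised, strict vs. “let-go”) is what the network analysis in the abstract amounts to at the graph level.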
Timbre is the attribute of sound that allows humans and other animals to distinguish among different sound sources. Studies based on psychophysical judgments of musical timbre, ecological analyses of sound's physical characteristics as well as machine learning approaches have all suggested that timbre is a multifaceted attribute that invokes both spectral and temporal sound features. Here, we explored the neural underpinnings of musical timbre. We used a neuro-computational framework based on spectro-temporal receptive fields, recorded from over a thousand neurons in the mammalian primary auditory cortex as well as from simulated cortical neurons, augmented with a nonlinear classifier. The model was able to perform robust instrument classification irrespective of pitch and playing style, with an accuracy of 98.7%. Using the same front end, the model was also able to reproduce perceptual distance judgments between timbres as perceived by human listeners. The study demonstrates that joint spectro-temporal features, such as those observed in the mammalian primary auditory cortex, are critical to provide the rich-enough representation necessary to account for perceptual judgments of timbre by human listeners, as well as recognition of musical instruments.
Music is a complex acoustic experience that we often take for granted. Whether sitting at a symphony hall or enjoying a melody over earphones, we have no difficulty identifying the instruments playing, following various beats, or simply distinguishing a flute from an oboe. Our brains rely on a number of sound attributes to analyze the music in our ears. These attributes can be straightforward like loudness or quite complex like the identity of the instrument. A major contributor to our ability to recognize instruments is what is formally called ‘timbre’. Of all perceptual attributes of music, timbre remains the most mysterious and least amenable to a simple mathematical abstraction. In this work, we examine the neural underpinnings of musical timbre in an attempt to both define its perceptual space and explore the processes underlying timbre-based recognition. We propose a scheme based on responses observed at the level of mammalian primary auditory cortex and show that it can accurately predict sound source recognition and perceptual timbre judgments by human listeners. The analyses presented here strongly suggest that rich representations such as those observed in auditory cortex are critical in mediating timbre percepts.
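The kind of front end described above can be illustrated with a toy spectro-temporal receptive field: a Gabor patch tuned to one spectral density and one temporal modulation rate, whose inner product with a spectrogram patch yields one feature for a downstream classifier. All parameter values here are illustrative, not fitted to cortical recordings:

```python
import numpy as np

def gabor_strf(n_freq=32, n_time=32, sf=0.15, tf=0.1):
    """Toy spectro-temporal receptive field: a Gabor patch selective
    for one spectral density sf (cycles/channel) and one temporal
    modulation rate tf (cycles/frame). Parameters are illustrative."""
    f = np.arange(n_freq) - n_freq // 2
    t = np.arange(n_time) - n_time // 2
    T, F = np.meshgrid(t, f)                       # (n_freq, n_time)
    envelope = np.exp(-(F**2 + T**2) / (2 * (n_freq / 4) ** 2))
    return envelope * np.cos(2 * np.pi * (sf * F + tf * T))

def strf_features(spectrogram, bank):
    """One feature per STRF: magnitude of its inner product with a
    (freq x time) spectrogram patch; these feed the classifier."""
    return np.array([abs(np.vdot(k, spectrogram)) for k in bank])

# a ripple stimulus matched to the first STRF responds most strongly
bank = [gabor_strf(sf=0.15, tf=0.1), gabor_strf(sf=0.05, tf=0.3)]
T, F = np.meshgrid(np.arange(32) - 16, np.arange(32) - 16)
ripple = np.cos(2 * np.pi * (0.15 * F + 0.1 * T))
feats = strf_features(ripple, bank)
```

A bank of many such filters, spanning different rates and densities, approximates the joint spectro-temporal representation that the study argues is necessary for timbre recognition.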
Understanding everyday behavior relies heavily upon understanding our ability to improvise: how we continuously anticipate and adapt in order to coordinate with our environment and others. Here we consider the ability of musicians to improvise, where they must spontaneously coordinate their actions with co-performers in order to produce novel musical expressions. Investigations of this behavior have traditionally focused on describing the organization of cognitive structures. Here, however, the focus is on how the time-evolving patterns of inter-musician movement coordination, as revealed by the mathematical tools of complex dynamical systems, can provide a new understanding of what potentiates the novelty of spontaneous musical action. We demonstrate this approach through the application of cross wavelet spectral analysis, which isolates the strength and patterning of the behavioral coordination that occurs between improvising musicians across a range of nested time-scales. Revealing the sophistication of the previously unexplored dynamics of movement coordination between improvising musicians is an important step toward understanding how creative musical expressions emerge from the spontaneous coordination of multiple musical bodies.
music improvisation; self-organization; movement coordination; complex dynamical systems; multiscale analysis
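The core computation of cross wavelet spectral analysis can be sketched directly: take the Morlet continuous wavelet transform of each musician's movement signal and multiply one by the complex conjugate of the other; the magnitude gives shared power per time-scale and the phase gives lead/lag. The signals and scale range below are synthetic stand-ins, not the study's data:

```python
import numpy as np

def morlet_cwt(x, scales, w0=6.0):
    """Morlet continuous wavelet transform via frequency-domain
    multiplication; returns an array of shape (len(scales), len(x))."""
    omega = 2 * np.pi * np.fft.fftfreq(len(x))
    X = np.fft.fft(x)
    W = np.empty((len(scales), len(x)), dtype=complex)
    for i, s in enumerate(scales):
        psi_hat = np.exp(-0.5 * (s * omega - w0) ** 2) * (omega > 0)
        W[i] = np.fft.ifft(X * psi_hat) * np.sqrt(s)
    return W

def cross_wavelet(x, y, scales):
    """Cross-wavelet spectrum W_x * conj(W_y): magnitude = shared
    power per scale, phase = lead/lag between the two signals."""
    return morlet_cwt(x, scales) * np.conj(morlet_cwt(y, scales))

# two synthetic 'movement' signals oscillating with a 32-sample period
t = np.arange(512)
x = np.sin(2 * np.pi * t / 32)
y = np.sin(2 * np.pi * t / 32 + 0.5)      # same rhythm, phase-shifted
scales = np.arange(4, 64)
power = np.abs(cross_wavelet(x, y, scales)).mean(axis=1)
# shared power peaks near scale w0 * period / (2*pi), i.e. around 31
```

Repeating this over the nested time-scales of real movement data is what isolates the strength and patterning of coordination described in the abstract.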
Although the cochlear implant (CI) is successful in supporting speech understanding in patients with severe to profound hearing loss, listening to music remains a challenging task for most CI listeners. The purpose of this study was to assess music perception ability and to provide clinically useful information regarding CI rehabilitation.
Ten normal hearing listeners and ten CI listeners with implant experience ranging from 2 to 6 years participated in subtests of pitch, rhythm, melody, and instrument perception. A synthesized piano tone was used as the musical stimulus. Participants were asked to discriminate two different tones in the pitch subtest. The rhythm subtest was constructed with sets of five, six, and seven intervals. The melody and instrument subtests assessed recognition of eight familiar melodies and five musical instruments from a closed set, respectively.
CI listeners performed significantly poorer than normal hearing listeners in pitch, melody, and instrument identification tasks. No significant differences were observed in rhythm recognition between groups. Correlations were not found between music perception ability and word recognition scores.
The results are consistent with previous studies showing that pitch, melody, and instrument are difficult for CI users to identify. Our results can provide fundamental information for the development of CI rehabilitation tools.
Cochlear implant; Music perception; Korean cochlear implant listener
Recent evidence suggests that age might affect the ability of listeners to process fundamental frequency cues in speech, and that this difficulty might impact the ability of older listeners to use and combine envelope and fine structure cues available in simulations of electro-acoustic and cochlear-implant hearing. The purpose of this paper is to examine whether this difficulty extends to music. Specifically, this study focuses on whether older listeners have a decreased ability to utilize and combine different types of cues in the perception of melody and timbre.
A group of older listeners with normal to near-normal hearing and a group of younger listeners with normal hearing participated in the melody and timbre recognition tasks of the University of Washington Clinical Assessment of Music Perception (CAMP) test. The recognition tasks were completed for five different processing conditions: 1) an unprocessed condition; 2) an eight-channel vocoding condition that simulated a traditional cochlear implant and contained temporal envelope cues; 3) a simulation of electro-acoustic stimulation (sEAS) that included a low-pass acoustic component and a high-pass vocoded portion, and which provided fine structure and envelope cues; 4) a condition that included only the low-pass acoustic portion of the sEAS stimulus; and 5) a condition that included only the high-frequency vocoded portion of the sEAS stimulus.
Melody recognition was excellent for both younger and older listeners in the conditions containing the unprocessed stimuli, the full sEAS stimuli, and the low-pass sEAS stimuli. Melody recognition was significantly worse in the cochlear-implant simulation condition, especially for the older group of listeners. Performance on the timbre task was highest for the unprocessed condition, and progressively decreased for the sEAS and cochlear-implant simulation conditions. Compared to younger listeners, older listeners had significantly poorer timbre recognition for all processing conditions. For melody recognition, the unprocessed low-frequency portion of the sEAS stimulus was the primary factor determining improved performance in the sEAS condition compared to the cochlear-implant simulation. For timbre recognition, both the unprocessed low-frequency and high-frequency vocoded portions of the sEAS stimulus contributed to sEAS improvement in the younger group. In contrast, most listeners in the older group were not able to take advantage of the high-frequency vocoded portion of the sEAS stimulus for timbre recognition.
The results of this simulation study support the idea that older listeners will have diminished timbre and melody perception in traditional cochlear-implant listening due to degraded envelope processing. The findings also suggest that music perception by older listeners with cochlear implants will be improved with the addition of low-frequency residual hearing. However, these improvements might not be comparable for all dimensions of music perception. That is, more improvement might be evident for tasks that rely primarily on the low-frequency portion of the electro-acoustic stimulus (e.g., melody recognition) and less improvement might be evident in situations that require across-frequency integration of cues (e.g., timbre perception).
hearing loss; age; melody recognition; timbre perception; cochlear implant; periodicity; fine structure
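The vocoder-style processing used in CI simulations of this kind can be sketched as follows. This is a deliberately idealized noise vocoder (FFT-mask filterbank, Hilbert envelopes, no refiltering after modulation), not the study's actual simulation software; the channel count and band edges are common illustrative choices:

```python
import numpy as np

def hilbert_env(x):
    """Temporal envelope via the analytic signal (FFT-based Hilbert)."""
    n = len(x)
    h = np.zeros(n)
    h[0] = 1
    h[1:(n + 1) // 2] = 2
    if n % 2 == 0:
        h[n // 2] = 1
    return np.abs(np.fft.ifft(np.fft.fft(x) * h))

def noise_vocode(x, fs, n_channels=8, lo=100.0, hi=8000.0, seed=0):
    """Minimal noise vocoder in the spirit of CI simulations: split the
    signal into log-spaced bands (via an idealized FFT-mask filterbank),
    extract each band's envelope, and use it to modulate noise filtered
    into the same band. Parameters are illustrative, not device-specific."""
    rng = np.random.default_rng(seed)
    edges = np.geomspace(lo, hi, n_channels + 1)
    freqs = np.abs(np.fft.fftfreq(len(x), 1 / fs))
    X = np.fft.fft(x)
    N = np.fft.fft(rng.standard_normal(len(x)))
    out = np.zeros(len(x))
    for k in range(n_channels):
        mask = (freqs >= edges[k]) & (freqs < edges[k + 1])
        band = np.real(np.fft.ifft(X * mask))      # analysis band
        carrier = np.real(np.fft.ifft(N * mask))   # band-limited noise
        out += hilbert_env(band) * carrier
    return out

# example: vocode a pure 1 kHz tone
fs = 16000
tone = np.sin(2 * np.pi * 1000 * np.arange(2048) / fs)
vocoded = noise_vocode(tone, fs)
```

Such a vocoder preserves the slow temporal envelopes that support rhythm and (partly) speech, while discarding the fine structure cues that melody and timbre recognition depend on, which is exactly the contrast the study exploits.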
Listeners' musical perception is influenced by cues that can be stored in short-term memory (e.g., within the same musical piece) or long-term memory (e.g., based on one's own musical culture). The present study tested how these cues (referred to, respectively, as proximal and distal cues) influence the perception of music from an unfamiliar culture. Western listeners who were naïve to gamelan music judged completeness and coherence for newly constructed melodies in the Balinese gamelan tradition. In these melodies, we manipulated the final tone with three possibilities: the original gong tone, an in-scale tone replacement, or an out-of-scale tone replacement. We also manipulated the musical timbre employed in gamelan pieces. We hypothesized that novice listeners are sensitive to out-of-scale changes, but not in-scale changes, and that this sensitivity might be influenced by the more unfamiliar timbre created by gamelan “sister” instruments, whose harmonics beat with the harmonics of the other instrument, creating a timbrally “shimmering” sound. The results showed: (1) out-of-scale endings were judged less complete than original gong and in-scale endings; (2) for melodies played with “sister” instruments, in-scale endings were judged as less complete than original endings. Furthermore, melodies using the original scale tones were judged more coherent than melodies containing few or multiple tone replacements, and melodies played on single instruments were judged more coherent than the same melodies played on sister instruments. Additionally, there was some indication of within-session statistical learning, with expectations for the initially novel materials developing during the course of the experiment. The data suggest the influence of both distal cues (e.g., previously unfamiliar timbres) and proximal cues (within the same sequence and over the experimental session) on the perception of melodies from other cultural systems based on unfamiliar tunings and scale systems.
expectations; timbre; tuning; gamelan; cross cultural
To what extent and in what arenas do collaborating musicians need to understand what they are doing in the same way? Two experienced jazz musicians who had never previously played together played three improvisations on a jazz standard (“It Could Happen to You”) on either side of a visual barrier. They were then immediately interviewed separately about the performances, their musical intentions, and their judgments of their partner's musical intentions, both from memory and prompted with the audiorecordings of the performances. Statements from both (audiorecorded) interviews as well as statements from an expert listener were extracted and anonymized. Two months later, the performers listened to the recordings and rated the extent to which they endorsed each statement. Performers endorsed statements they themselves had generated more often than statements by their performing partner and the expert listener; their overall level of agreement with each other was greater than chance but moderate to low, with disagreements about the quality of one of the performances and about who was responsible for it. The quality of the performances combined with the disparities in agreement suggest that, at least in this case study, fully shared understanding of what happened is not essential for successful improvisation. The fact that the performers endorsed an expert listener's statements more than their partner's argues against a simple notion that performers' interpretations are always privileged relative to an outsider's.
shared understanding; intersubjectivity; collaboration; communication; interaction; improvisation; music; jazz
Learning to play a musical piece is a prime example of complex sensorimotor learning in humans. Recent studies using electroencephalography (EEG) and transcranial magnetic stimulation (TMS) indicate that passive listening to melodies previously rehearsed by subjects on a musical instrument evokes differential brain activation as compared with unrehearsed melodies. These changes were already evident after 20–30 minutes of training. The exact brain regions involved in these differential brain responses have not yet been delineated.
Using functional MRI (fMRI), we investigated subjects who passively listened to simple piano melodies from two conditions: In the ‘actively learned melodies’ condition subjects learned to play a piece on the piano during a short training session of a maximum of 30 minutes before the fMRI experiment, and in the ‘passively learned melodies’ condition subjects listened passively to and were thus familiarized with the piece. We found increased fMRI responses to actively compared with passively learned melodies in the left anterior insula, extending to the left fronto-opercular cortex. The area of significant activation overlapped the insular sensorimotor hand area as determined by our meta-analysis of previous functional imaging studies.
Our results provide evidence for differential brain responses to action-related sounds after short periods of learning in the human insular cortex. As the hand sensorimotor area of the insular cortex appears to be involved in these responses, re-activation of movement representations stored in the insular sensorimotor cortex may have contributed to the observed effect. The insular cortex may therefore play a role in the initial learning phase of action-perception associations.
Skilled performers such as athletes or musicians can improve their performance by imagining the actions or sensory outcomes associated with their skill. Performers vary widely in their auditory and motor imagery abilities, and these individual differences influence sensorimotor learning. It is unknown whether imagery abilities influence both memory encoding and retrieval. We examined how auditory and motor imagery abilities influence musicians' encoding (during Learning, as they practiced novel melodies), and retrieval (during Recall of those melodies). Pianists learned melodies by listening without performing (auditory learning) or performing without sound (motor learning); following Learning, pianists performed the melodies from memory with auditory feedback (Recall). During either Learning (Experiment 1) or Recall (Experiment 2), pianists experienced either auditory interference, motor interference, or no interference. Pitch accuracy (percentage of correct pitches produced) and temporal regularity (variability of quarter-note interonset intervals) were measured at Recall. Independent tests measured auditory and motor imagery skills. Pianists' pitch accuracy was higher following auditory learning than following motor learning and lower in motor interference conditions (Experiments 1 and 2). Both auditory and motor imagery skills improved pitch accuracy overall. Auditory imagery skills modulated pitch accuracy encoding (Experiment 1): Higher auditory imagery skill corresponded to higher pitch accuracy following auditory learning with auditory or motor interference, and following motor learning with motor or no interference. These findings suggest that auditory imagery abilities decrease vulnerability to interference and compensate for missing auditory feedback at encoding. 
Auditory imagery skills also influenced temporal regularity at retrieval (Experiment 2): Higher auditory imagery skill predicted greater temporal regularity during Recall in the presence of auditory interference. Motor imagery aided pitch accuracy overall when interference conditions were manipulated at encoding (Experiment 1) but not at retrieval (Experiment 2). Thus, skilled performers' imagery abilities had distinct influences on encoding and retrieval of musical sequences.
sensorimotor learning; auditory imagery; motor imagery; individual differences; music performance
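The two outcome measures reported above are simple to state in code: pitch accuracy as the percentage of position-matched pitches, and temporal regularity as the coefficient of variation of inter-onset intervals. The scoring scheme and note data below are simplified and hypothetical, not the study's exact alignment procedure:

```python
def pitch_accuracy(produced, target):
    """Percentage of produced pitches matching the target melody,
    compared position by position (a simplified scoring scheme)."""
    matches = sum(p == t for p, t in zip(produced, target))
    return 100.0 * matches / len(target)

def ioi_cv(onsets):
    """Temporal regularity as the coefficient of variation of
    inter-onset intervals: lower values mean steadier quarter notes."""
    iois = [b - a for a, b in zip(onsets, onsets[1:])]
    mean = sum(iois) / len(iois)
    var = sum((x - mean) ** 2 for x in iois) / len(iois)
    return (var ** 0.5) / mean

# hypothetical recall: one wrong pitch, slightly uneven timing
acc = pitch_accuracy(["C4", "E4", "G4", "B4"], ["C4", "E4", "G4", "C5"])
cv = ioi_cv([0.0, 0.5, 1.0, 1.55, 2.0])
```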
• Creativity and personality of classical, jazz, and folk musicians were compared.
• Jazz musicians show higher divergent thinking ability.
• Jazz musicians accomplish more creative musical activities and achievements.
• Classical musicians show a high amount of practice and win more competitions.
• Folk musicians are more extraverted and publish more musical productions.
The music genre of jazz is commonly associated with creativity. However, this association has hardly been formally tested. Therefore, this study aimed to examine whether jazz musicians actually differ in creativity and personality from musicians of other music genres. We compared students of classical music, jazz music, and folk music with respect to their musical activities, psychometric creativity, and different aspects of personality. In line with expectations, jazz musicians are more frequently engaged in extracurricular musical activities and also complete a higher number of creative musical achievements. Additionally, jazz musicians show higher ideational creativity as measured by divergent thinking tasks, and tend to be more open to new experiences than classical musicians. This study provides the first empirical evidence that jazz musicians show particularly high creativity with respect to domain-specific musical accomplishments, but also in terms of domain-general indicators of divergent thinking ability that may be relevant for musical improvisation. The findings are further discussed with respect to differences in formal and informal learning approaches between music genres.
Music genre; Creativity; Personality; Divergent thinking; Music learning
Research and outcomes with cochlear implants (CIs) have revealed a dichotomy in the cues necessary for speech and music recognition. CI devices typically transmit 16–22 spectral channels, each modulated slowly in time. This coarse representation provides enough information to support speech understanding in quiet and rhythmic perception in music, but not enough to support speech understanding in noise or melody recognition. Melody recognition requires some capacity for complex pitch perception, which in turn depends strongly on access to spectral fine structure cues. Thus, temporal envelope cues are adequate for speech perception under optimal listening conditions, while spectral fine structure cues are needed for music perception. In this paper, we present recent experiments that directly measure CI users’ melodic pitch perception using a melodic contour identification (MCI) task. While normal-hearing (NH) listeners’ performance was consistently high across experiments, MCI performance was highly variable across CI users. CI users’ MCI performance was significantly affected by instrument timbre, as well as by the presence of a competing instrument. In general, CI users had great difficulty extracting melodic pitch from complex stimuli. However, musically-experienced CI users often performed as well as NH listeners, and MCI training in less experienced subjects greatly improved performance. With fixed constraints on spectral resolution, such as occur with hearing loss or an auditory prosthesis, training and experience can provide considerable improvements in music perception and appreciation.
cochlear implant; music perception; melodic contour identification
Our ability to listen selectively to single sound sources in complex auditory environments is termed “auditory stream segregation.” This ability is affected by peripheral disorders such as hearing loss, as well as by plasticity in central processing such as occurs with musical training. Brain plasticity induced by musical training can enhance the ability to segregate sound, leading to improvements in a variety of auditory abilities. The melody segregation ability of 12 cochlear-implant (CI) recipients was tested using a new method to determine the perceptual distance needed to segregate a simple 4-note melody from a background of interleaved random-pitch distractor notes. In experiment 1, participants rated the difficulty of segregating the melody from the distractor notes while four physical properties of the distractor notes were varied. In experiment 2, listeners were asked to rate the dissimilarity between melody patterns whose notes differed on the four physical properties simultaneously. Multidimensional scaling analysis transformed the dissimilarity ratings into perceptual distances. Regression between physical and perceptual cues then derived the minimal perceptual distance needed to segregate the melody. The most efficient streaming cue for CI users was loudness. Compared with normal-hearing listeners without musical backgrounds, CI users needed a greater difference on the perceptual dimension correlated with the temporal envelope for stream segregation. No differences in streaming efficiency were found between the perceptual dimensions linked to the F0 and the spectral envelope. Combined with our previous results in normally-hearing musicians and non-musicians, these results show that differences in training, as well as differences in peripheral auditory processing (hearing impairment and the use of a hearing device), influence the way that listeners use different acoustic cues for segregating interleaved musical streams.
auditory streaming; cochlear implant; music training; melody segregation; hearing impairment; pitch; loudness; timbre
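The step that turns dissimilarity ratings into perceptual distances can be sketched with classical (Torgerson) MDS, here as a stand-in for whichever MDS variant the study actually used:

```python
import numpy as np

def classical_mds(D, n_dims=2):
    """Classical (Torgerson) MDS: embed items so that Euclidean
    distances approximate the dissimilarities in matrix D."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n           # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                   # double-centered
    vals, vecs = np.linalg.eigh(B)
    order = np.argsort(vals)[::-1][:n_dims]       # largest eigenvalues
    return vecs[:, order] * np.sqrt(np.clip(vals[order], 0, None))

# sanity check: distances among four corners of a unit square are recovered
pts = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
D = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
config = classical_mds(D)
D_hat = np.linalg.norm(config[:, None] - config[None, :], axis=-1)
```

With perceptual coordinates in hand, regressing them onto the four manipulated physical properties yields the mapping from physical to perceptual cues described in the abstract.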
While the cochlear implant provides many deaf patients with good speech understanding in quiet, music perception and appreciation with the cochlear implant remains a major challenge for most cochlear implant users. The present study investigated whether a closed-set melodic contour identification (MCI) task could be used to quantify cochlear implant users’ ability to recognize musical melodies and whether MCI performance could be improved with moderate auditory training. The present study also compared MCI performance with familiar melody identification (FMI) performance, with and without MCI training.
For the MCI task, test stimuli were melodic contours composed of 5 notes of equal duration whose frequencies corresponded to musical intervals. The interval between successive notes in each contour was varied between 1 and 5 semitones; the “root note” of the contours was also varied (A3, A4, and A5). Nine distinct musical patterns were generated for each interval and root note condition, resulting in a total of 135 musical contours. The identification of these melodic contours was measured in 11 cochlear implant users. FMI was also evaluated in the same subjects; recognition of 12 familiar melodies was tested with and without rhythm cues. MCI was also trained in 6 subjects, using custom software and melodic contours presented in a different frequency range from that used for testing.
Results showed that MCI performance was highly variable among cochlear implant users, ranging from 14% to 91% correct. For most subjects, MCI performance improved as the number of semitones between successive notes was increased; performance was slightly lower for the A3 root note condition. Mean FMI performance was 58% correct when rhythm cues were preserved and 29% correct when rhythm cues were removed. Statistical analyses revealed no significant correlation between MCI and FMI performance (with or without rhythm cues). However, MCI performance was significantly correlated with vowel recognition performance; FMI performance was not correlated with cochlear implant subjects’ phoneme recognition performance. Preliminary results also showed that MCI training improved all subjects’ MCI performance; this improvement also generalized to FMI performance.
Preliminary data indicate that the closed-set MCI task is a viable approach to quantifying an important component of cochlear implant users’ music perception. The improvement in MCI performance and its generalization to FMI performance with training suggest that MCI training may be useful for improving cochlear implant users’ music perception and appreciation; such training may also be necessary to evaluate patient performance properly, as acute measures may underestimate the amount of musical information transmitted by the cochlear implant device and received by cochlear implant listeners.
Salient sensory experiences often have a strong emotional tone, but the neuropsychological relations between perceptual characteristics of sensory objects and the affective information they convey remain poorly defined. Here we addressed the relationship between sound identity and emotional information using music. In two experiments, we investigated whether perception of emotions is influenced by altering the musical instrument on which the music is played, independently of other musical features. In the first experiment, 40 novel melodies, each representing one of four emotions (happiness, sadness, fear, or anger), were each recorded on four different instruments (an electronic synthesizer, a piano, a violin, and a trumpet), controlling for melody, tempo, and loudness between instruments. Healthy participants (23 young adults aged 18–30 years, 24 older adults aged 58–75 years) were asked to select which emotion they thought each musical stimulus represented in a four-alternative forced-choice task. Using a generalized linear mixed model, we found a significant interaction between instrument and emotion judgement, with a similar pattern in young and older adults (p < .0001 for each age group). The effect was not attributable to musical expertise. In the second experiment, using the same melodies and experimental design, the interaction between timbre and perceived emotion was replicated (p < .05) in another group of young adults for novel synthetic timbres designed to incorporate timbral cues to particular emotions. Our findings show that timbre (instrument identity) independently affects the perception of emotions in music after controlling for other acoustic, cognitive, and performance factors.
Timbre; Emotion; Music; Auditory object
Apart from its natural relevance to cognition, music provides a window into the intimate relationships between production, perception, experience, and emotion. Here, emotional responses and neural activity were observed as they evolved together with stimulus parameters over several minutes. Participants listened to a skilled music performance that included the natural fluctuations in timing and sound intensity that musicians use to evoke emotional responses. A mechanical performance of the same piece served as a control. Before and after fMRI scanning, participants reported real-time emotional responses on a 2-dimensional rating scale (arousal and valence) as they listened to each performance. During fMRI scanning, participants listened without reporting emotional responses. Limbic and paralimbic brain areas responded to the expressive dynamics of human music performance, and both emotion and reward related activations during music listening were dependent upon musical training. Moreover, dynamic changes in timing predicted ratings of emotional arousal, as well as real-time changes in neural activity. BOLD signal changes correlated with expressive timing fluctuations in cortical and subcortical motor areas consistent with pulse perception, and in a network consistent with the human mirror neuron system. These findings show that expressive music performance evokes emotion and reward related neural activations, and that music's affective impact on the brains of listeners is altered by musical training. Our observations are consistent with the idea that music performance evokes an emotional response through a form of empathy that is based, at least in part, on the perception of movement and on violations of pulse-based temporal expectancies.
An extensive body of literature indicates that cochlear implants are effective in supporting speech perception of persons with severe to profound hearing losses who do not benefit to any great extent from conventional hearing aids. Adult CI recipients tend to show significant improvement in speech perception within 3 months following implantation as a result of mere experience. Furthermore, CI recipients continue to show modest improvement as long as 5 years post implantation. In contrast, data taken from single testing protocols of music perception and appraisal indicate that CIs are less than ideal in transmitting important structural features of music, such as pitch, melody and timbre. However, there is presently little information documenting changes in music perception or appraisal over extended time as a result of mere experience.
This study examined two basic questions: 1) Do adult CI recipients show significant improvement in perceptual acuity or appraisal of specific music listening tasks when tested in two consecutive years? 2) If there are tasks for which CI recipients show significant improvement with time, are there particular demographic variables that predict those CI recipients most likely to show improvement with extended CI use?
A longitudinal cohort study in which implant recipients returned annually for clinic visits.
The study included 209 adult cochlear implant recipients with at least 9 months implant experience before their first year measurement.
Data collection and analysis
Outcomes were measured at the patient’s annual visit in two consecutive years. Paired t-tests were used to test for significant improvement from one year to the next. Variables demonstrating significant improvement were then entered into regression analyses to identify the demographic variables useful in predicting that improvement.
There were no significant differences in music perception outcomes as a function of type of device or processing strategy used. Only familiar melody recognition (FMR) and recognition of melody excerpts with lyrics (MERT-L) showed significant improvement from one year to the next. After controlling for the baseline value, hearing aid use, months of use, music listening habits after implantation and formal musical training in elementary school were significant predictors of FMR improvement. Bilateral CI use, formal musical training in high school and beyond, and a measure of sequential cognitive processing were significant predictors of MERT-L improvement.
As a result of mere experience, these adult CI recipients demonstrated fairly consistent music perception and appraisal on measures gathered in two consecutive years. Gains tended to be modest, and can be associated with characteristics such as use of hearing aids, listening experiences, or bilateral use (in the case of lyrics). These results have implications for counseling CI recipients with regard to realistic expectations and strategies for enhancing music perception and enjoyment.
Cochlear implant; Music; Cognitive; Speech Perception
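The two-step analysis described in the abstract above (paired t-tests on year-to-year change, then regression to find predictors of improvement) can be sketched on simulated data. All scores, effect sizes, and predictors below are invented for illustration; none are taken from the study:

```python
import math
import random
import statistics

random.seed(0)
n = 209  # cohort size reported in the study

# Purely illustrative simulated scores: year-1 values plus a small
# hypothetical true gain at year 2.
year1 = [random.gauss(60, 12) for _ in range(n)]
year2 = [y + random.gauss(3.0, 6.0) for y in year1]

# Step 1: paired t-test for year-to-year improvement.
diffs = [b - a for a, b in zip(year1, year2)]
t = statistics.mean(diffs) / (statistics.stdev(diffs) / math.sqrt(n))
# |t| above ~1.97 (two-tailed critical value, df = 208) indicates a
# significant mean improvement.

# Step 2: regress improvement on a predictor (here, baseline score,
# mirroring the study's control for the baseline value).
mean1, meand = statistics.mean(year1), statistics.mean(diffs)
slope = (
    sum((x - mean1) * (d - meand) for x, d in zip(year1, diffs))
    / sum((x - mean1) ** 2 for x in year1)
)
```

In the study itself, the regressions additionally included demographic predictors such as hearing-aid use, months of use, listening habits, and musical training; the single-predictor slope here only illustrates the mechanics.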
Understanding speech in a background of competing noise is challenging, especially for individuals with hearing loss or deficits in auditory processing ability. The ability to hear in background noise cannot be predicted from the audiogram, an assessment of peripheral hearing ability; therefore, it is important to consider the impact of central and cognitive factors on speech-in-noise perception. Auditory processing in complex environments is reflected in neural encoding of pitch, timing, and timbre, the crucial elements of speech and music. Musical expertise in processing pitch, timing, and timbre may transfer to enhancements in speech-in-noise perception due to shared neural pathways for speech and music. Through cognitive-sensory interactions, musicians develop skills enabling them to selectively listen to relevant signals embedded in a network of melodies and harmonies, and this experience leads in turn to enhanced ability to focus on one voice in a background of other voices. Here we review recent work examining the biological mechanisms of speech and music perception and the potential for musical experience to ameliorate speech-in-noise listening difficulties.
Brain stem; music; speech in noise; timing; pitch
This paper examines the idea that attraction to music is generated at a cognitive level through the formation and activation of networks of interlinked “nodes.” Although the networks involved are vast, the basic mechanism for activating the links is relatively simple. Two comprehensive cognitive-behavioral models of musical engagement are examined with the aim of identifying the underlying cognitive mechanisms and processes involved in musical experience. A “dynamical minimalism” approach (after Nowak, 2004) is applied to re-interpret musical engagement (listening, performing, composing, or imagining any of these) and to revise the latest version of the reciprocal-feedback model (RFM) of music processing. Specifically, a single cognitive mechanism of “spreading activation” through previously associated networks is proposed as a pleasurable outcome of musical engagement. This mechanism underlies the dynamic interaction of the various components of the RFM, and can thereby explain the generation of positive affects in the listener’s musical experience. This includes determinants of that experience stemming from the characteristics of the individual engaging in the musical activity (whether listener, composer, improviser, or performer), the situation and contexts (e.g., social factors), and the music (e.g., genre, structural features). The theory calls for new directions for future research, two being (1) further investigation of the components of the RFM to better understand musical experience and (2) more rigorous scrutiny of common findings about the salience of familiarity in musical experience and preference.
musical experience; cognitive model; reciprocal-feedback model; spreading activation; mind-body; neural networks; preference; familiarity
Previous studies of auditory expectation have focused on the expectedness perceived by listeners retrospectively in response to events. In contrast, this research examines predictive uncertainty—a property of listeners' prospective state of expectation prior to the onset of an event. We examine the information-theoretic concept of Shannon entropy as a model of predictive uncertainty in music cognition. This is motivated by the Statistical Learning Hypothesis, which proposes that schematic expectations reflect probabilistic relationships between sensory events learned implicitly through exposure. Using probability estimates from an unsupervised, variable-order Markov model, 12 melodic contexts high in entropy and 12 melodic contexts low in entropy were selected from two musical repertoires differing in structural complexity (simple and complex). Musicians and non-musicians listened to the stimuli and provided explicit judgments of perceived uncertainty (explicit uncertainty). We also examined an indirect measure of uncertainty, computed as the entropy of expectedness distributions obtained using a classical probe-tone paradigm in which listeners rated the perceived expectedness of the final note in a melodic sequence (inferred uncertainty). Finally, we simulated listeners' perception of expectedness and uncertainty using computational models of auditory expectation. A detailed model comparison indicates which model parameters maximize fit to the data and how they compare to existing models in the literature. The results show that listeners experience greater uncertainty in high-entropy musical contexts than low-entropy contexts. This effect is particularly apparent for inferred uncertainty and is stronger in musicians than non-musicians. Consistent with the Statistical Learning Hypothesis, the results suggest that increased domain-relevant training is associated with an increasingly accurate cognitive model of probabilistic structure in music.
statistical learning; information theory; entropy; expectation; auditory cognition; music; melody
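The central quantity in the abstract above, Shannon entropy over a distribution of possible continuations, is straightforward to compute. The two distributions below are invented for illustration, not drawn from the study's Markov model:

```python
import math

def shannon_entropy(probs):
    """Entropy in bits of a next-note probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Low-entropy (predictable) context: one continuation strongly expected.
low = [0.85, 0.05, 0.05, 0.05]
# High-entropy (uncertain) context: four equiprobable continuations.
high = [0.25, 0.25, 0.25, 0.25]

print(shannon_entropy(high))  # 2.0 bits, the maximum for four outcomes
print(shannon_entropy(low) < shannon_entropy(high))  # True
```

The "inferred uncertainty" measure corresponds to applying this same formula to listeners' expectedness ratings from the probe-tone task, normalized into a probability distribution over continuations.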
To investigate the neural substrates that underlie spontaneous musical performance, we examined improvisation in professional jazz pianists using functional MRI. By employing two paradigms that differed widely in musical complexity, we found that improvisation (compared to production of over-learned musical sequences) was consistently characterized by a dissociated pattern of activity in the prefrontal cortex: extensive deactivation of dorsolateral prefrontal and lateral orbital regions with focal activation of the medial prefrontal (frontal polar) cortex. Such a pattern may reflect a combination of psychological processes required for spontaneous improvisation, in which internally motivated, stimulus-independent behaviors unfold in the absence of central processes that typically mediate self-monitoring and conscious volitional control of ongoing performance. Changes in prefrontal activity during improvisation were accompanied by widespread activation of neocortical sensorimotor areas (that mediate the organization and execution of musical performance) as well as deactivation of limbic structures (that regulate motivation and emotional tone). This distributed neural pattern may provide a cognitive context that enables the emergence of spontaneous creative activity.
Functional Magnetic Resonance Imaging (fMRI) was used to study the activation of cerebral motor networks during auditory perception of music in professional keyboard musicians (n = 12). In the activation paradigm, subjects listened to two-part polyphonic music while either critically appraising the performance or imagining they were performing it themselves. Two-part polyphonic audition and bimanual motor imagery circumvented a hemisphere bias associated with the convention of playing the melody with the right hand. Both tasks activated ventral premotor and auditory cortices, bilaterally, and the right anterior parietal cortex, when contrasted with 12 musically unskilled controls. Although left ventral premotor activation was increased during imagery (compared to judgment), bilateral dorsal premotor and right posterior-superior parietal activations were specific to motor imagery. The latter suggests that musicians not only recruited their manual motor repertoire but also performed a spatial transformation from the vertically perceived pitch axis (high and low sound) to the horizontal axis of the keyboard. Imagery-specific activations in controls were seen in left dorsal parietal-premotor and supplementary motor cortices. Although these activations were weaker than in musicians, this overlapping distribution indicated the recruitment of a general ‘mirror-neuron’ circuitry. These two levels of sensorimotor transformation point towards common principles by which the brain organizes audition-driven music performance and visually guided task performance.