Results 1-25 (579535)

1.  Words and Melody Are Intertwined in Perception of Sung Words: EEG and Behavioral Evidence 
PLoS ONE  2010;5(3):e9889.
Language and music, two of the most unique human cognitive abilities, are combined in song, rendering it an ecological model for comparing speech and music cognition. The present study was designed to determine whether words and melodies in song are processed interactively or independently, and to examine the influence of attention on the processing of words and melodies in song. Event-Related brain Potentials (ERPs) and behavioral data were recorded while non-musicians listened to pairs of sung words (prime and target) presented in four experimental conditions: same word, same melody; same word, different melody; different word, same melody; different word, different melody. Participants were asked to attend to either the words or the melody, and to perform a same/different task. In both attentional tasks, different word targets elicited an N400 component, as predicted based on previous results. Most interestingly, different melodies (sung with the same word) elicited an N400 component followed by a late positive component. Finally, ERP and behavioral data converged in showing interactions between the linguistic and melodic dimensions of sung words. The finding that the N400 effect, a well-established marker of semantic processing, was modulated by musical melody in song suggests that variations in musical features affect word processing in sung language. Implications of the interactions between words and melody are discussed in light of evidence for shared neural processing resources between the phonological/semantic aspects of language and the melodic/harmonic aspects of music.
doi:10.1371/journal.pone.0009889
PMCID: PMC2847603  PMID: 20360991
2.  Connecting to Create: Expertise in Musical Improvisation Is Associated with Increased Functional Connectivity between Premotor and Prefrontal Areas 
The Journal of Neuroscience  2014;34(18):6156-6163.
Musicians have been used extensively to study neural correlates of long-term practice, but no studies have investigated the specific effects of training musical creativity. Here, we used human functional MRI to measure brain activity during improvisation in a sample of 39 professional pianists with varying backgrounds in classical and jazz piano playing. We found total hours of improvisation experience to be negatively associated with activity in frontoparietal executive cortical areas. In contrast, improvisation training was positively associated with functional connectivity of the bilateral dorsolateral prefrontal cortices, dorsal premotor cortices, and presupplementary motor areas. The effects were significant when controlling for hours of classical piano practice and age. These results indicate that even neural mechanisms involved in creative behaviors, which require a flexible online generation of novel and meaningful output, can be automated by training. Moreover, improvisational musical training can influence functional brain properties at a network level. We show that the greater functional connectivity seen in experienced improvisers may reflect a more efficient exchange of information within associative networks of importance for musical creativity.
doi:10.1523/JNEUROSCI.4769-13.2014
PMCID: PMC4004805  PMID: 24790186
Creativity; expertise; fMRI; improvisation; music; plasticity
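As a rough illustration of the covariate-controlled association reported in this abstract, the sketch below computes a partial correlation between a connectivity measure and improvisation experience while regressing out classical practice hours and age. All data and variable names are invented for illustration; this is not the study's analysis pipeline.

```python
# Hypothetical sketch: correlate a connectivity measure with improvisation
# experience while controlling for classical practice hours and age,
# via partial correlation on regression residuals. All data are simulated.
import numpy as np

rng = np.random.default_rng(0)
n = 39                                   # sample size from the abstract
age = rng.uniform(20, 60, n)
classical_hours = rng.uniform(0, 20000, n)
improv_hours = rng.uniform(0, 10000, n)
connectivity = 0.0001 * improv_hours + rng.normal(0, 0.5, n)

def residualize(y, covariates):
    """Remove the linear contribution of the covariates from y."""
    X = np.column_stack([np.ones(len(y))] + covariates)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

r_conn = residualize(connectivity, [classical_hours, age])
r_impr = residualize(improv_hours, [classical_hours, age])
partial_r = np.corrcoef(r_conn, r_impr)[0, 1]
print(f"partial correlation: {partial_r:.3f}")
```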
3.  Music Perception and Appraisal: Cochlear Implant Users and Simulated CI Listening 
Background
The inability to hear music well may contribute to decreased quality of life for cochlear implant (CI) users. Researchers have reported recently on the generally poor ability of CI users to perceive music, and a few researchers have reported on the enjoyment of music by CI users. However, the relation between music perception skills and music enjoyment is much less explored. Only one study has attempted to predict CI users’ enjoyment and perception of music from the users’ demographic variables and other perceptual skills (Gfeller et al., 2008). Gfeller’s results yielded different predictive relationships for music perception and music enjoyment, and the relationships were weak, at best.
Purpose
The first goal of this study is to clarify the nature of the relationship between music perception skills and musical enjoyment for CI users by employing a battery of music tests. The second goal is to determine whether normal hearing (NH) subjects, listening with a CI-simulation, can be used as a model to represent actual CI users for either music enjoyment ratings or music perception tasks.
Research Design
A prospective, cross-sectional observational study. Original music stimuli (unprocessed) were presented to CI users, and music stimuli processed with CI-simulation software were presented to twenty NH listeners (CIsim). As a control, original music stimuli were also presented to five other NH listeners. All listeners appraised twenty-four musical excerpts, performed music perception tests, and filled out a musical background questionnaire. Music perception tests were the Appreciation of Music in Cochlear Implantees (AMICI), Montreal Battery for Evaluation of Amusia (MBEA), Melodic Contour Identification (MCI), and University of Washington Clinical Assessment of Music Perception (UW-CAMP).
Study Sample
Twenty-five NH adults (22 – 56 years old), recruited from the local and research communities, participated in the study. Ten adult CI users (46 – 80 years old), recruited from the patient population of the local adult cochlear implant program, also participated in this study.
Data Collection and Analysis
Musical excerpts were appraised using a 7-point rating scale and music perception tests were scored as designed. Analysis of variance was performed on appraisal ratings, perception scores, and questionnaire data with listener group as a factor. Correlations were computed between musical appraisal ratings and perceptual scores on each music test.
Results
Music is rated as more enjoyable by CI users than by the NH listeners hearing music through a simulation (CIsim), and the difference is statistically significant. For roughly half of the music perception tests, there are no statistically significant differences between the performance of the CI users and of the CIsim listeners. Generally, correlations between appraisal ratings and music perception scores are weak or non-existent.
Conclusions
NH adults listening to music that has been processed through a CI-simulation program are a reasonable model for actual CI users for many music perception skills, but not for rating musical enjoyment. For CI users, the apparent independence of music perception skills and music enjoyment (as assessed by appraisals) indicates that music enjoyment should not be assumed and should be examined explicitly.
doi:10.3766/jaaa.23.5.6
PMCID: PMC3400338  PMID: 22533978
music; cochlear implant; cochlear implant simulation; timbre; melody; appraisal
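The analysis this entry describes (analysis of variance on appraisal ratings with listener group as a factor, plus appraisal-perception correlations) can be sketched as follows on simulated data; group sizes mirror the study sample, but all numbers are made up.

```python
# Illustrative sketch (simulated data): one-way ANOVA on 7-point appraisal
# ratings with listener group as the factor, plus a correlation between
# appraisal and a perception score. Group sizes follow the study sample
# (10 CI, 20 CIsim, 5 NH); ratings and scores are invented.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
ci = rng.uniform(3, 7, 10)       # CI users' appraisal ratings
cisim = rng.uniform(1, 5, 20)    # NH listeners hearing a CI simulation
nh = rng.uniform(4, 7, 5)        # NH controls, unprocessed music

f, p = stats.f_oneway(ci, cisim, nh)
print(f"ANOVA: F = {f:.2f}, p = {p:.3f}")

# Correlation between appraisal and a perception test score (e.g., % correct)
perception = rng.uniform(20, 95, 10)
r, p_r = stats.pearsonr(ci, perception)
print(f"appraisal vs. perception: r = {r:.2f}, p = {p_r:.3f}")
```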
4.  The Causal Inference of Cortical Neural Networks during Music Improvisations 
PLoS ONE  2014;9(12):e112776.
We present an EEG study of two music improvisation experiments. Professional musicians with a high level of improvisation skill were asked to perform music either according to notes (composed music) or in improvisation. Each piece of music was performed in two different modes: strict mode and “let-go” mode. Synchronized EEG data were measured from both musicians and listeners. We used one of the most reliable causality measures, conditional Mutual Information from Mixed Embedding (MIME), to analyze directed correlations between different EEG channels, and combined it with network theory to construct both intra-brain and cross-brain networks. Differences were identified in intra-brain neural networks between composed music and improvisation and between strict mode and “let-go” mode. Particular brain regions such as frontal, parietal and temporal regions were found to play a key role in differentiating the brain activities between different playing conditions. By comparing the level of degree centralities in intra-brain neural networks, we found a difference between the responses of musicians and listeners when comparing the different playing conditions.
doi:10.1371/journal.pone.0112776
PMCID: PMC4260787  PMID: 25489852
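A minimal stand-in for the pipeline this abstract describes: the paper uses MIME, a conditional mutual-information estimator, but the sketch below substitutes a plain histogram-based mutual information between simulated EEG channels, thresholds the resulting matrix into a network, and reads off degree centralities.

```python
# Simplified stand-in for the abstract's pipeline: pairwise mutual
# information between EEG channels (histogram estimator, not the MIME
# method used in the paper), thresholded into a network whose degree
# centralities can then be compared. Data are simulated.
import numpy as np

rng = np.random.default_rng(2)
n_channels, n_samples = 8, 5000
eeg = rng.normal(size=(n_channels, n_samples))
eeg[1] += 0.6 * eeg[0]                    # induce dependence between ch0, ch1

def mutual_info(x, y, bins=16):
    """Histogram-based mutual information in nats."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz])))

mi = np.zeros((n_channels, n_channels))
for i in range(n_channels):
    for j in range(i + 1, n_channels):
        mi[i, j] = mi[j, i] = mutual_info(eeg[i], eeg[j])

adjacency = mi > np.percentile(mi[mi > 0], 75)   # keep the strongest links
degree_centrality = adjacency.sum(axis=1)
print("degree centrality per channel:", degree_centrality)
```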
5.  Music in Our Ears: The Biological Bases of Musical Timbre Perception 
PLoS Computational Biology  2012;8(11):e1002759.
Timbre is the attribute of sound that allows humans and other animals to distinguish among different sound sources. Studies based on psychophysical judgments of musical timbre, ecological analyses of sound's physical characteristics as well as machine learning approaches have all suggested that timbre is a multifaceted attribute that invokes both spectral and temporal sound features. Here, we explored the neural underpinnings of musical timbre. We used a neuro-computational framework based on spectro-temporal receptive fields, recorded from over a thousand neurons in the mammalian primary auditory cortex as well as from simulated cortical neurons, augmented with a nonlinear classifier. The model was able to perform robust instrument classification irrespective of pitch and playing style, with an accuracy of 98.7%. Using the same front end, the model was also able to reproduce perceptual distance judgments between timbres as perceived by human listeners. The study demonstrates that joint spectro-temporal features, such as those observed in the mammalian primary auditory cortex, are critical to provide the rich-enough representation necessary to account for perceptual judgments of timbre by human listeners, as well as recognition of musical instruments.
Author Summary
Music is a complex acoustic experience that we often take for granted. Whether sitting at a symphony hall or enjoying a melody over earphones, we have no difficulty identifying the instruments playing, following various beats, or simply distinguishing a flute from an oboe. Our brains rely on a number of sound attributes to analyze the music in our ears. These attributes can be straightforward like loudness or quite complex like the identity of the instrument. A major contributor to our ability to recognize instruments is what is formally called ‘timbre’. Of all perceptual attributes of music, timbre remains the most mysterious and least amenable to a simple mathematical abstraction. In this work, we examine the neural underpinnings of musical timbre in an attempt to both define its perceptual space and explore the processes underlying timbre-based recognition. We propose a scheme based on responses observed at the level of mammalian primary auditory cortex and show that it can accurately predict sound source recognition and perceptual timbre judgments by human listeners. The analyses presented here strongly suggest that rich representations such as those observed in auditory cortex are critical in mediating timbre percepts.
doi:10.1371/journal.pcbi.1002759
PMCID: PMC3486808  PMID: 23133363
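As a toy analogue of the joint spectro-temporal representation this abstract argues for (not the paper's cortical STRF model), the sketch below computes a log spectrogram of a synthetic tone and takes the magnitude of its 2D Fourier transform as joint spectro-temporal modulation features.

```python
# Toy spectro-temporal feature extraction (an assumed simplification, not
# the paper's cortical model): log spectrogram of a synthetic harmonic tone,
# then the 2D FFT magnitude as a joint spectro-temporal modulation spectrum.
import numpy as np
from scipy import signal

fs = 16000
t = np.arange(0, 1.0, 1 / fs)
# Synthetic "instrument": two harmonics with a 5 Hz amplitude modulation
tone = np.sin(2 * np.pi * 220 * t) + 0.5 * np.sin(2 * np.pi * 440 * t)
tone *= 1 + 0.3 * np.sin(2 * np.pi * 5 * t)        # temporal modulation

f, tt, spec = signal.spectrogram(tone, fs=fs, nperseg=512, noverlap=256)
log_spec = np.log(spec + 1e-10)

# Joint spectro-temporal modulation spectrum (a scale-rate representation)
modulation = np.abs(np.fft.fft2(log_spec - log_spec.mean()))
features = modulation.flatten()                    # input to a classifier
print("feature vector length:", features.size)
```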
6.  Music Perception Ability of Korean Adult Cochlear Implant Listeners 
Objectives
Although the cochlear implant (CI) is successful for understanding speech in patients with severe to profound hearing loss, listening to music is a challenging task to most CI listeners. The purpose of this study was to assess music perception ability and to provide clinically useful information regarding CI rehabilitation.
Methods
Ten normal hearing and ten CI listeners with implant experience ranging from 2 to 6 years participated in the subtests of pitch, rhythm, melody, and instrument. A synthesized piano tone was used as the musical stimulus. Participants were asked to discriminate two different tones during the pitch subtest. The rhythm subtest was constructed with sets of five, six, and seven intervals. The melody and instrument subtests assessed recognition of eight familiar melodies and five musical instruments from a closed set, respectively.
Results
CI listeners performed significantly poorer than normal hearing listeners in pitch, melody, and instrument identification tasks. No significant differences were observed in rhythm recognition between groups. Correlations were not found between music perception ability and word recognition scores.
Conclusion
The results are consistent with previous studies showing that pitch, melody, and instrument are difficult for CI users to identify. Our results can provide fundamental information concerning the development of CI rehabilitation tools.
doi:10.3342/ceo.2012.5.S1.S53
PMCID: PMC3369983  PMID: 22701773
Cochlear implant; Music perception; Korean cochlear implant listener
7.  Expectations in culturally unfamiliar music: influences of proximal and distal cues and timbral characteristics 
Listeners' musical perception is influenced by cues that can be stored in short-term memory (e.g., within the same musical piece) or long-term memory (e.g., based on one's own musical culture). The present study tested how these cues (referred to as, respectively, proximal and distal cues) influence the perception of music from an unfamiliar culture. Western listeners who were naïve to Gamelan music judged completeness and coherence for newly constructed melodies in the Balinese gamelan tradition. In these melodies, we manipulated the final tone with three possibilities: the original gong tone, an in-scale tone replacement or an out-of-scale tone replacement. We also manipulated the musical timbre employed in Gamelan pieces. We hypothesized that novice listeners are sensitive to out-of-scale changes, but not in-scale changes, and that this might be influenced by the more unfamiliar timbre created by Gamelan “sister” instruments whose harmonics beat with the harmonics of the other instrument, creating a timbrally “shimmering” sound. The results showed: (1) out-of-scale endings were judged less complete than original gong and in-scale endings; (2) for melodies played with “sister” instruments, in-scale endings were judged as less complete than original endings. Furthermore, melodies using the original scale tones were judged more coherent than melodies containing few or multiple tone replacements; melodies played on single instruments were judged more coherent than the same melodies played on sister instruments. Additionally, there was some indication of within-session statistical learning, with expectations for the initially-novel materials developing during the course of the experiment. The data suggest the influence of both distal cues (e.g., previously unfamiliar timbres) and proximal cues (within the same sequence and over the experimental session) on the perception of melodies from other cultural systems based on unfamiliar tunings and scale systems.
doi:10.3389/fpsyg.2013.00789
PMCID: PMC3819523  PMID: 24223562
expectations; timbre; tuning; gamelan; cross cultural
8.  Jazz improvisers' shared understanding: a case study 
To what extent and in what arenas do collaborating musicians need to understand what they are doing in the same way? Two experienced jazz musicians who had never previously played together played three improvisations on a jazz standard (“It Could Happen to You”) on either side of a visual barrier. They were then immediately interviewed separately about the performances, their musical intentions, and their judgments of their partner's musical intentions, both from memory and prompted with the audiorecordings of the performances. Statements from both (audiorecorded) interviews as well as statements from an expert listener were extracted and anonymized. Two months later, the performers listened to the recordings and rated the extent to which they endorsed each statement. Performers endorsed statements they themselves had generated more often than statements by their performing partner and the expert listener; their overall level of agreement with each other was greater than chance but moderate to low, with disagreements about the quality of one of the performances and about who was responsible for it. The quality of the performances combined with the disparities in agreement suggest that, at least in this case study, fully shared understanding of what happened is not essential for successful improvisation. The fact that the performers endorsed an expert listener's statements more than their partner's argues against a simple notion that performers' interpretations are always privileged relative to an outsider's.
doi:10.3389/fpsyg.2014.00808
PMCID: PMC4126153  PMID: 25152740
shared understanding; intersubjectivity; collaboration; communication; interaction; improvisation; music; jazz
9.  A Rapid Sound-Action Association Effect in Human Insular Cortex 
PLoS ONE  2007;2(2):e259.
Background
Learning to play a musical piece is a prime example of complex sensorimotor learning in humans. Recent studies using electroencephalography (EEG) and transcranial magnetic stimulation (TMS) indicate that passive listening to melodies previously rehearsed by subjects on a musical instrument evokes differential brain activation as compared with unrehearsed melodies. These changes were already evident after 20–30 minutes of training. The exact brain regions involved in these differential brain responses have not yet been delineated.
Methodology/Principal Findings
Using functional MRI (fMRI), we investigated subjects who passively listened to simple piano melodies from two conditions: In the ‘actively learned melodies’ condition subjects learned to play a piece on the piano during a short training session of a maximum of 30 minutes before the fMRI experiment, and in the ‘passively learned melodies’ condition subjects listened passively to and were thus familiarized with the piece. We found increased fMRI responses to actively compared with passively learned melodies in the left anterior insula, extending to the left fronto-opercular cortex. The area of significant activation overlapped the insular sensorimotor hand area as determined by our meta-analysis of previous functional imaging studies.
Conclusions/Significance
Our results provide evidence for differential brain responses to action-related sounds after short periods of learning in the human insular cortex. As the hand sensorimotor area of the insular cortex appears to be involved in these responses, re-activation of movement representations stored in the insular sensorimotor cortex may have contributed to the observed effect. The insular cortex may therefore play a role in the initial learning phase of action-perception associations.
doi:10.1371/journal.pone.0000259
PMCID: PMC1800344  PMID: 17327919
10.  Auditory and motor imagery modulate learning in music performance 
Skilled performers such as athletes or musicians can improve their performance by imagining the actions or sensory outcomes associated with their skill. Performers vary widely in their auditory and motor imagery abilities, and these individual differences influence sensorimotor learning. It is unknown whether imagery abilities influence both memory encoding and retrieval. We examined how auditory and motor imagery abilities influence musicians' encoding (during Learning, as they practiced novel melodies), and retrieval (during Recall of those melodies). Pianists learned melodies by listening without performing (auditory learning) or performing without sound (motor learning); following Learning, pianists performed the melodies from memory with auditory feedback (Recall). During either Learning (Experiment 1) or Recall (Experiment 2), pianists experienced either auditory interference, motor interference, or no interference. Pitch accuracy (percentage of correct pitches produced) and temporal regularity (variability of quarter-note interonset intervals) were measured at Recall. Independent tests measured auditory and motor imagery skills. Pianists' pitch accuracy was higher following auditory learning than following motor learning and lower in motor interference conditions (Experiments 1 and 2). Both auditory and motor imagery skills improved pitch accuracy overall. Auditory imagery skills modulated pitch accuracy encoding (Experiment 1): Higher auditory imagery skill corresponded to higher pitch accuracy following auditory learning with auditory or motor interference, and following motor learning with motor or no interference. These findings suggest that auditory imagery abilities decrease vulnerability to interference and compensate for missing auditory feedback at encoding. Auditory imagery skills also influenced temporal regularity at retrieval (Experiment 2): Higher auditory imagery skill predicted greater temporal regularity during Recall in the presence of auditory interference. Motor imagery aided pitch accuracy overall when interference conditions were manipulated at encoding (Experiment 1) but not at retrieval (Experiment 2). Thus, skilled performers' imagery abilities had distinct influences on encoding and retrieval of musical sequences.
doi:10.3389/fnhum.2013.00320
PMCID: PMC3696840  PMID: 23847495
sensorimotor learning; auditory imagery; motor imagery; individual differences; music performance
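The two outcome measures named in this abstract can be made concrete with a short sketch on invented performance data: pitch accuracy as the percentage of correct pitches produced, and temporal regularity as the coefficient of variation of quarter-note inter-onset intervals.

```python
# Sketch of the two outcome measures named in the abstract, on made-up
# performance data: pitch accuracy as percent correct pitches, and temporal
# regularity as the coefficient of variation of quarter-note inter-onset
# intervals (lower CV = more regular).
import numpy as np

target_pitches = [60, 62, 64, 65, 67, 69, 71, 72]       # MIDI note numbers
played_pitches = [60, 62, 64, 66, 67, 69, 71, 72]       # one wrong pitch
onsets = np.array([0.00, 0.51, 1.02, 1.49, 2.01, 2.53, 3.00, 3.52])  # sec

pitch_accuracy = 100 * np.mean(
    [p == q for p, q in zip(played_pitches, target_pitches)]
)
ioi = np.diff(onsets)                    # inter-onset intervals
temporal_cv = ioi.std() / ioi.mean()

print(f"pitch accuracy: {pitch_accuracy:.1f}%")
print(f"IOI coefficient of variation: {temporal_cv:.3f}")
```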
11.  Creativity and personality in classical, jazz and folk musicians 
Highlights
•Creativity and personality of classical, jazz, and folk musicians were compared.
•Jazz musicians show higher divergent thinking ability.
•Jazz musicians accomplish more creative musical activities and achievements.
•Classical musicians show a high amount of practice and win more competitions.
•Folk musicians are more extraverted and publish more musical productions.
The music genre of jazz is commonly associated with creativity. However, this association has hardly been formally tested. Therefore, this study aimed at examining whether jazz musicians actually differ in creativity and personality from musicians of other music genres. We compared students of classical music, jazz music, and folk music with respect to their musical activities, psychometric creativity and different aspects of personality. In line with expectations, jazz musicians are more frequently engaged in extracurricular musical activities, and also complete a higher number of creative musical achievements. Additionally, jazz musicians show higher ideational creativity as measured by divergent thinking tasks, and tend to be more open to new experiences than classical musicians. This study provides the first empirical evidence that jazz musicians show particularly high creativity with respect to domain-specific musical accomplishments but also in terms of domain-general indicators of divergent thinking ability that may be relevant for musical improvisation. The findings are further discussed with respect to differences in formal and informal learning approaches between music genres.
doi:10.1016/j.paid.2014.01.064
PMCID: PMC3989052  PMID: 24895472
Music genre; Creativity; Personality; Divergent thinking; Music learning
12.  Melodic Contour Identification and Music Perception by Cochlear Implant Users 
Research and outcomes with cochlear implants (CIs) have revealed a dichotomy in the cues necessary for speech and music recognition. CI devices typically transmit 16–22 spectral channels, each modulated slowly in time. This coarse representation provides enough information to support speech understanding in quiet and rhythmic perception in music, but not enough to support speech understanding in noise or melody recognition. Melody recognition requires some capacity for complex pitch perception, which in turn depends strongly on access to spectral fine structure cues. Thus, temporal envelope cues are adequate for speech perception under optimal listening conditions, while spectral fine structure cues are needed for music perception. In this paper, we present recent experiments that directly measure CI users’ melodic pitch perception using a melodic contour identification (MCI) task. While normal-hearing (NH) listeners’ performance was consistently high across experiments, MCI performance was highly variable across CI users. CI users’ MCI performance was significantly affected by instrument timbre, as well as by the presence of a competing instrument. In general, CI users had great difficulty extracting melodic pitch from complex stimuli. However, musically-experienced CI users often performed as well as NH listeners, and MCI training in less experienced subjects greatly improved performance. With fixed constraints on spectral resolution, such as occur with hearing loss or an auditory prosthesis, training and experience can provide considerable improvements in music perception and appreciation.
doi:10.1111/j.1749-6632.2009.04551.x
PMCID: PMC3627487  PMID: 19673835
cochlear implant; music perception; melodic contour identification
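A hedged sketch of how MCI stimuli of this kind are typically constructed: a contour is a short note pattern defined by its shape and the semitone spacing between successive notes. The shape names below are illustrative, not a reproduction of the paper's exact stimulus set.

```python
# Illustrative melodic contour generator for an MCI-style task: each
# contour is five MIDI notes following a named shape, scaled by the
# semitone spacing between successive notes. Shape names are assumptions.
def contour(shape, spacing, root=69):            # root=69 is MIDI A4
    """Return five MIDI note numbers following the named contour shape."""
    steps = {
        "rising":         [0, 1, 2, 3, 4],
        "falling":        [4, 3, 2, 1, 0],
        "flat":           [0, 0, 0, 0, 0],
        "rising-falling": [0, 1, 2, 1, 0],
        "falling-rising": [2, 1, 0, 1, 2],
    }[shape]
    return [root + s * spacing for s in steps]

print(contour("rising-falling", spacing=2))      # A4 root, 2-semitone steps
```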
13.  Dynamic Emotional and Neural Responses to Music Depend on Performance Expression and Listener Experience 
PLoS ONE  2010;5(12):e13812.
Apart from its natural relevance to cognition, music provides a window into the intimate relationships between production, perception, experience, and emotion. Here, emotional responses and neural activity were observed as they evolved together with stimulus parameters over several minutes. Participants listened to a skilled music performance that included the natural fluctuations in timing and sound intensity that musicians use to evoke emotional responses. A mechanical performance of the same piece served as a control. Before and after fMRI scanning, participants reported real-time emotional responses on a 2-dimensional rating scale (arousal and valence) as they listened to each performance. During fMRI scanning, participants listened without reporting emotional responses. Limbic and paralimbic brain areas responded to the expressive dynamics of human music performance, and both emotion and reward related activations during music listening were dependent upon musical training. Moreover, dynamic changes in timing predicted ratings of emotional arousal, as well as real-time changes in neural activity. BOLD signal changes correlated with expressive timing fluctuations in cortical and subcortical motor areas consistent with pulse perception, and in a network consistent with the human mirror neuron system. These findings show that expressive music performance evokes emotion and reward related neural activations, and that music's affective impact on the brains of listeners is altered by musical training. Our observations are consistent with the idea that music performance evokes an emotional response through a form of empathy that is based, at least in part, on the perception of movement and on violations of pulse-based temporal expectancies.
doi:10.1371/journal.pone.0013812
PMCID: PMC3002933  PMID: 21179549
14.  The acoustic and perceptual cues affecting melody segregation for listeners with a cochlear implant 
Our ability to listen selectively to single sound sources in complex auditory environments is termed “auditory stream segregation.” This ability is affected by peripheral disorders such as hearing loss, as well as by plasticity in central processing such as occurs with musical training. Brain plasticity induced by musical training can enhance the ability to segregate sound, leading to improvements in a variety of auditory abilities. The melody segregation ability of 12 cochlear-implant recipients was tested using a new method to determine the perceptual distance needed to segregate a simple 4-note melody from a background of interleaved random-pitch distractor notes. In experiment 1, participants rated the difficulty of segregating the melody from the distractor notes. Four physical properties of the distractor notes were changed. In experiment 2, listeners were asked to rate the dissimilarity between melody patterns whose notes differed on the four physical properties simultaneously. Multidimensional scaling analysis transformed the dissimilarity ratings into perceptual distances. Regression between physical and perceptual cues then derived the minimal perceptual distance needed to segregate the melody. The most efficient streaming cue for CI users was loudness. Compared with normal-hearing listeners without musical backgrounds, CI users needed a greater difference on the perceptual dimension correlated with the temporal envelope for stream segregation. No differences in streaming efficiency were found between the perceptual dimensions linked to the F0 and the spectral envelope. Combined with our previous results in normally-hearing musicians and non-musicians, the results show that differences in training as well as differences in peripheral auditory processing (hearing impairment and the use of a hearing device) influence the way that listeners use different acoustic cues for segregating interleaved musical streams.
doi:10.3389/fpsyg.2013.00790
PMCID: PMC3818467  PMID: 24223563
auditory streaming; cochlear implant; music training; melody segregation; hearing impairment; pitch; loudness; timbre
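The analysis chain this abstract describes, from dissimilarity ratings through multidimensional scaling to perceptual distances, can be sketched as follows; the rating matrix below is invented for illustration.

```python
# Minimal MDS sketch: turn pairwise dissimilarity ratings among melody
# patterns into coordinates in a perceptual space, then read off a
# perceptual distance. The rating matrix is made up.
import numpy as np
from sklearn.manifold import MDS

# Symmetric dissimilarity ratings among 4 melody patterns (0 = identical)
ratings = np.array([
    [0.0, 2.0, 5.0, 6.0],
    [2.0, 0.0, 4.0, 5.5],
    [5.0, 4.0, 0.0, 1.5],
    [6.0, 5.5, 1.5, 0.0],
])

mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(ratings)          # perceptual-space coordinates
d01 = np.linalg.norm(coords[0] - coords[1])  # distance between patterns 0, 1
print(f"perceptual distance between patterns 0 and 1: {d01:.2f}")
```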
15.  Melodic Contour Identification by Cochlear Implant Listeners 
Ear and Hearing  2007;28(3):302-319.
Objective
While the cochlear implant provides many deaf patients with good speech understanding in quiet, music perception and appreciation with the cochlear implant remains a major challenge for most cochlear implant users. The present study investigated whether a closed-set melodic contour identification (MCI) task could be used to quantify cochlear implant users’ ability to recognize musical melodies and whether MCI performance could be improved with moderate auditory training. The present study also compared MCI performance with familiar melody identification (FMI) performance, with and without MCI training.
Methods
For the MCI task, test stimuli were melodic contours composed of 5 notes of equal duration whose frequencies corresponded to musical intervals. The interval between successive notes in each contour was varied between 1 and 5 semitones; the “root note” of the contours was also varied (A3, A4, and A5). Nine distinct musical patterns were generated for each interval and root note condition, resulting in a total of 135 musical contours. The identification of these melodic contours was measured in 11 cochlear implant users. FMI was also evaluated in the same subjects; recognition of 12 familiar melodies was tested with and without rhythm cues. MCI was also trained in 6 subjects, using custom software and melodic contours presented in a different frequency range from that used for testing.
Results
Results showed that MCI recognition performance was highly variable among cochlear implant users, ranging from 14% to 91% correct. For most subjects, MCI performance improved as the number of semitones between successive notes was increased; performance was slightly lower for the A3 root note condition. Mean FMI performance was 58% correct when rhythm cues were preserved and 29% correct when rhythm cues were removed. Statistical analyses revealed no significant correlation between MCI performance and FMI performance (with or without rhythmic cues). However, MCI performance was significantly correlated with vowel recognition performance; FMI performance was not correlated with cochlear implant subjects’ phoneme recognition performance. Preliminary results also showed that the MCI training improved all subjects’ MCI performance; the improved MCI performance also generalized to improved FMI performance.
Conclusions
Preliminary data indicate that the closed-set MCI task is a viable approach toward quantifying an important component of cochlear implant users’ music perception. The improvement in MCI performance and generalization to FMI performance with training suggests that MCI training may be useful for improving cochlear implant users’ music perception and appreciation; such training may be necessary to properly evaluate patient performance, as acute measures may underestimate the amount of musical information transmitted by the cochlear implant device and received by cochlear implant listeners.
doi:10.1097/01.aud.0000261689.35445.20
PMCID: PMC3627492  PMID: 17485980
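The stimulus arithmetic in the Methods above (9 patterns × 5 interval sizes × 3 root notes = 135 contours) can be checked with a short sketch; the shape labels are placeholders, and note frequencies follow the standard MIDI-to-Hz conversion.

```python
# Sketch of the stimulus set arithmetic from the Methods: 9 contour shapes
# x 5 interval sizes (1-5 semitones) x 3 root notes (A3, A4, A5) = 135
# contours. Shape labels are placeholders; frequencies use standard tuning.
import itertools

SHAPES = [f"shape{k + 1}" for k in range(9)]            # 9 contour shapes
INTERVALS = [1, 2, 3, 4, 5]                             # semitones
ROOTS = {"A3": 57, "A4": 69, "A5": 81}                  # MIDI note numbers

def midi_to_hz(m):
    """Standard equal-tempered conversion, A4 (MIDI 69) = 440 Hz."""
    return 440.0 * 2 ** ((m - 69) / 12)

stimuli = list(itertools.product(SHAPES, INTERVALS, ROOTS))
print(len(stimuli), "contours")                         # 135
print(f"A3 root frequency: {midi_to_hz(ROOTS['A3']):.1f} Hz")  # 220.0 Hz
```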
16.  It's not what you play, it's how you play it: Timbre affects perception of emotion in music 
Salient sensory experiences often have a strong emotional tone, but the neuropsychological relations between perceptual characteristics of sensory objects and the affective information they convey remain poorly defined. Here we addressed the relationship between sound identity and emotional information using music. In two experiments, we investigated whether perception of emotions is influenced by altering the musical instrument on which the music is played, independently of other musical features. In the first experiment, 40 novel melodies each representing one of four emotions (happiness, sadness, fear, or anger) were each recorded on four different instruments (an electronic synthesizer, a piano, a violin, and a trumpet), controlling for melody, tempo, and loudness between instruments. Healthy participants (23 young adults aged 18–30 years, 24 older adults aged 58–75 years) were asked to select which emotion they thought each musical stimulus represented in a four-alternative forced-choice task. Using a generalized linear mixed model we found a significant interaction between instrument and emotion judgement with a similar pattern in young and older adults (p < .0001 for each age group). The effect was not attributable to musical expertise. In the second experiment using the same melodies and experimental design, the interaction between timbre and perceived emotion was replicated (p < .05) in another group of young adults for novel synthetic timbres designed to incorporate timbral cues to particular emotions. Our findings show that timbre (instrument identity) independently affects the perception of emotions in music after controlling for other acoustic, cognitive, and performance factors.
doi:10.1080/17470210902765957
PMCID: PMC2683716  PMID: 19391047
Timbre; Emotion; Music; Auditory object
17.  Temporal Stability of Music Perception and Appraisal Scores of Adult Cochlear Implant Recipients 
Background
An extensive body of literature indicates that cochlear implants are effective in supporting speech perception of persons with severe to profound hearing losses who do not benefit to any great extent from conventional hearing aids. Adult CI recipients tend to show significant improvement in speech perception within 3 months following implantation as a result of mere experience. Furthermore, CI recipients continue to show modest improvement as long as 5 years post implantation. In contrast, data taken from single testing protocols of music perception and appraisal indicate that CIs are less than ideal in transmitting important structural features of music, such as pitch, melody and timbre. However, there is presently little information documenting changes in music perception or appraisal over extended time as a result of mere experience.
Purpose
This study examined two basic questions: 1) Do adult CI recipients show significant improvement in perceptual acuity or appraisal of specific music listening tasks when tested in two consecutive years? 2) If there are tasks for which CI recipients show significant improvement with time, are there particular demographic variables that predict those CI recipients most likely to show improvement with extended CI use?
Research Design
A longitudinal cohort study. Implant recipients return annually for visits to the clinic.
Study Sample
The study included 209 adult cochlear implant recipients with at least 9 months of implant experience before their first year measurement.
Data collection and analysis
Outcomes were measured on the patient’s annual visit in two consecutive years. Paired t-tests were used to test for significant improvement from one year to the next. Those variables demonstrating significant improvement were subjected to regression analyses performed to detect the demographic variables useful in predicting said improvement.
Results
There were no significant differences in music perception outcomes as a function of type of device or processing strategy used. Only familiar melody recognition (FMR) and recognition of melody excerpts with lyrics (MERT-L) showed significant improvement from one year to the next. After controlling for the baseline value, significant predictors of FMR improvement were hearing aid use, months of use, music listening habits after implantation, and formal musical training in elementary school. Bilateral CI use, formal musical training in high school and beyond, and a measure of sequential cognitive processing were significant predictors of MERT-L improvement.
Conclusions
As a result of mere experience, these adult CI recipients demonstrated fairly consistent music perception and appraisal on measures gathered in two consecutive years. Gains made tend to be modest and can be associated with characteristics such as use of hearing aids, listening experiences, or bilateral use (in the case of lyrics). These results have implications for counseling of CI recipients with regard to realistic expectations and strategies for enhancing music perception and enjoyment.
PMCID: PMC2844251  PMID: 20085197
Cochlear implant; Music; Cognitive; Speech Perception
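The year-to-year comparison described above reduces to a paired t-test on each recipient's scores in two consecutive years; the sketch below runs one on simulated familiar-melody-recognition scores (n = 209, as in the study sample).

```python
# Sketch of the longitudinal comparison in the abstract: a paired t-test on
# the same recipients' familiar melody recognition (FMR) scores in two
# consecutive years. Scores are simulated, not study data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
year1 = rng.uniform(30, 80, 209)                 # % correct, n = 209
year2 = year1 + rng.normal(2.0, 5.0, 209)        # modest average gain

t, p = stats.ttest_rel(year2, year1)
print(f"paired t-test: t = {t:.2f}, p = {p:.4f}")
print(f"mean improvement: {np.mean(year2 - year1):.1f} percentage points")
```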
18.  A dynamically minimalist cognitive explanation of musical preference: is familiarity everything? 
This paper examines the idea that attraction to music is generated at a cognitive level through the formation and activation of networks of interlinked “nodes.” Although the networks involved are vast, the basic mechanism for activating the links is relatively simple. Two comprehensive cognitive-behavioral models of musical engagement are examined with the aim of identifying the underlying cognitive mechanisms and processes involved in musical experience. A “dynamical minimalism” approach (after Nowak, 2004) is applied to re-interpret musical engagement (listening, performing, composing, or imagining any of these) and to revise the latest version of the reciprocal-feedback model (RFM) of music processing. Specifically, a single cognitive mechanism of “spreading activation” through previously associated networks is proposed as a pleasurable outcome of musical engagement. This mechanism underlies the dynamic interaction of the various components of the RFM, and can thereby explain the generation of positive affects in the listener’s musical experience. This includes determinants of that experience stemming from the characteristics of the individual engaging in the musical activity (whether listener, composer, improviser, or performer), the situation and contexts (e.g., social factors), and the music (e.g., genre, structural features). The theory calls for new directions for future research, two being (1) further investigation of the components of the RFM to better understand musical experience and (2) more rigorous scrutiny of common findings about the salience of familiarity in musical experience and preference.
doi:10.3389/fpsyg.2014.00038
PMCID: PMC3915416  PMID: 24567723
musical experience; cognitive model; reciprocal-feedback model; spreading activation; mind-body; neural networks; preference; familiarity
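The core mechanism this paper proposes, spreading activation through an associative network, can be illustrated with a toy sketch; the network, link strengths, and decay parameter below are all invented.

```python
# Toy illustration of spreading activation: activation injected at one node
# decays as it propagates along weighted links in an associative network.
# The network and all parameter values are invented for illustration.
links = {                  # node -> [(neighbor, association strength)]
    "melody A":    [("artist", 0.8), ("summer 2010", 0.5)],
    "artist":      [("melody B", 0.7)],
    "summer 2010": [("melody C", 0.4)],
    "melody B":    [],
    "melody C":    [],
}

def spread(source, decay=0.6, threshold=0.05):
    """Breadth-first spread of activation with multiplicative decay."""
    activation = {source: 1.0}
    frontier = [source]
    while frontier:
        node = frontier.pop(0)
        for neighbor, strength in links[node]:
            a = activation[node] * strength * decay
            if a > threshold and a > activation.get(neighbor, 0.0):
                activation[neighbor] = a
                frontier.append(neighbor)
    return activation

print(spread("melody A"))  # activation reaching associated nodes
```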
19.  Neural Encoding of Speech and Music: Implications for Hearing Speech in Noise 
Seminars in Hearing  2011;32(2):129-141.
Understanding speech in a background of competing noise is challenging, especially for individuals with hearing loss or deficits in auditory processing ability. The ability to hear in background noise cannot be predicted from the audiogram, an assessment of peripheral hearing ability; therefore, it is important to consider the impact of central and cognitive factors on speech-in-noise perception. Auditory processing in complex environments is reflected in neural encoding of pitch, timing, and timbre, the crucial elements of speech and music. Musical expertise in processing pitch, timing, and timbre may transfer to enhancements in speech-in-noise perception due to shared neural pathways for speech and music. Through cognitive-sensory interactions, musicians develop skills enabling them to selectively listen to relevant signals embedded in a network of melodies and harmonies, and this experience leads in turn to enhanced ability to focus on one voice in a background of other voices. Here we review recent work examining the biological mechanisms of speech and music perception and the potential for musical experience to ameliorate speech-in-noise listening difficulties.
doi:10.1055/s-0031-1277234
PMCID: PMC3989107  PMID: 24748717
Brain stem; music; speech in noise; timing; pitch
20.  Neural Substrates of Spontaneous Musical Performance: An fMRI Study of Jazz Improvisation 
PLoS ONE  2008;3(2):e1679.
To investigate the neural substrates that underlie spontaneous musical performance, we examined improvisation in professional jazz pianists using functional MRI. By employing two paradigms that differed widely in musical complexity, we found that improvisation (compared to production of over-learned musical sequences) was consistently characterized by a dissociated pattern of activity in the prefrontal cortex: extensive deactivation of dorsolateral prefrontal and lateral orbital regions with focal activation of the medial prefrontal (frontal polar) cortex. Such a pattern may reflect a combination of psychological processes required for spontaneous improvisation, in which internally motivated, stimulus-independent behaviors unfold in the absence of central processes that typically mediate self-monitoring and conscious volitional control of ongoing performance. Changes in prefrontal activity during improvisation were accompanied by widespread activation of neocortical sensorimotor areas (that mediate the organization and execution of musical performance) as well as deactivation of limbic structures (that regulate motivation and emotional tone). This distributed neural pattern may provide a cognitive context that enables the emergence of spontaneous creative activity.
doi:10.1371/journal.pone.0001679
PMCID: PMC2244806  PMID: 18301756
21.  Predictive uncertainty in auditory sequence processing 
Frontiers in Psychology  2014;5:1052.
Previous studies of auditory expectation have focused on the expectedness perceived by listeners retrospectively in response to events. In contrast, this research examines predictive uncertainty—a property of listeners' prospective state of expectation prior to the onset of an event. We examine the information-theoretic concept of Shannon entropy as a model of predictive uncertainty in music cognition. This is motivated by the Statistical Learning Hypothesis, which proposes that schematic expectations reflect probabilistic relationships between sensory events learned implicitly through exposure. Using probability estimates from an unsupervised, variable-order Markov model, 12 melodic contexts high in entropy and 12 melodic contexts low in entropy were selected from two musical repertoires differing in structural complexity (simple and complex). Musicians and non-musicians listened to the stimuli and provided explicit judgments of perceived uncertainty (explicit uncertainty). We also examined an indirect measure of uncertainty computed as the entropy of expectedness distributions obtained using a classical probe-tone paradigm where listeners rated the perceived expectedness of the final note in a melodic sequence (inferred uncertainty). Finally, we simulate listeners' perception of expectedness and uncertainty using computational models of auditory expectation. A detailed model comparison indicates which model parameters maximize fit to the data and how they compare to existing models in the literature. The results show that listeners experience greater uncertainty in high-entropy musical contexts than low-entropy contexts. This effect is particularly apparent for inferred uncertainty and is stronger in musicians than non-musicians. Consistent with the Statistical Learning Hypothesis, the results suggest that increased domain-relevant training is associated with an increasingly accurate cognitive model of probabilistic structure in music.
doi:10.3389/fpsyg.2014.01052
PMCID: PMC4171990  PMID: 25295018
statistical learning; information theory; entropy; expectation; auditory cognition; music; melody
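The paper's central quantity, Shannon entropy as a measure of predictive uncertainty, has a direct worked example: compare the entropy of a sharply peaked next-note distribution (low uncertainty) with that of a flat one (high uncertainty). The two toy distributions below are invented.

```python
# Worked example of Shannon entropy as predictive uncertainty: the entropy
# of a next-note probability distribution, in bits. Both toy distributions
# are invented stand-ins for low- and high-entropy melodic contexts.
import numpy as np

def shannon_entropy(p):
    """Entropy in bits of a discrete probability distribution."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

low = [0.90, 0.05, 0.03, 0.02]       # one continuation is almost certain
high = [0.25, 0.25, 0.25, 0.25]      # all continuations equally likely

print(f"low-entropy context:  {shannon_entropy(low):.2f} bits")
print(f"high-entropy context: {shannon_entropy(high):.2f} bits")   # 2.00
```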
22.  Cerebral Activations Related to Audition-Driven Performance Imagery in Professional Musicians 
PLoS ONE  2014;9(4):e93681.
Functional Magnetic Resonance Imaging (fMRI) was used to study the activation of cerebral motor networks during auditory perception of music in professional keyboard musicians (n = 12). In the activation paradigm, subjects listened to two-part polyphonic music while either critically appraising the performance or imagining they were performing it themselves. Two-part polyphonic audition and bimanual motor imagery circumvented a hemisphere bias associated with the convention of playing the melody with the right hand. Both tasks activated ventral premotor and auditory cortices, bilaterally, and the right anterior parietal cortex, when contrasted to 12 musically unskilled controls. Although left ventral premotor activation was increased during imagery (compared to judgment), bilateral dorsal premotor and right posterior-superior parietal activations were quite unique to motor imagery. The latter suggests that musicians not only recruited their manual motor repertoire but also performed a spatial transformation from the vertically perceived pitch axis (high and low sound) to the horizontal axis of the keyboard. Imagery-specific activations in controls were seen in left dorsal parietal-premotor and supplementary motor cortices. Although these activations were less strong compared to musicians, this overlapping distribution indicated the recruitment of a general ‘mirror-neuron’ circuitry. These two levels of sensori-motor transformations point towards common principles by which the brain organizes audition-driven music performance and visually guided task performance.
doi:10.1371/journal.pone.0093681
PMCID: PMC3979724  PMID: 24714661
23.  Neural Correlates of Lyrical Improvisation: An fMRI Study of Freestyle Rap 
Scientific Reports  2012;2:834.
The neural correlates of creativity are poorly understood. Freestyle rap provides a unique opportunity to study spontaneous lyrical improvisation, a multidimensional form of creativity at the interface of music and language. Here we use functional magnetic resonance imaging to characterize this process. Task contrast analyses indicate that improvised performance is characterized by dissociated activity in medial and dorsolateral prefrontal cortices, providing a context in which stimulus-independent behaviors may unfold in the absence of conscious monitoring and volitional control. Connectivity analyses reveal widespread improvisation-related correlations between medial prefrontal, cingulate motor, perisylvian cortices and amygdala, suggesting the emergence of a network linking motivation, language, affect and movement. Lyrical improvisation appears to be characterized by altered relationships between regions coupling intention and action, in which conventional executive control may be bypassed and motor control directed by cingulate motor mechanisms. These functional reorganizations may facilitate the initial improvisatory phase of creative behavior.
doi:10.1038/srep00834
PMCID: PMC3498928  PMID: 23155479
24.  The Role of Emotion in Musical Improvisation: An Analysis of Structural Features 
PLoS ONE  2014;9(8):e105144.
One of the primary functions of music is to convey emotion, yet how music accomplishes this task remains unclear. For example, simple correlations between mode (major vs. minor) and emotion (happy vs. sad) do not adequately explain the enormous range, subtlety or complexity of musically induced emotions. In this study, we examined the structural features of unconstrained musical improvisations generated by jazz pianists in response to emotional cues. We hypothesized that musicians would not utilize any universal rules to convey emotions, but would instead combine heterogeneous musical elements together in order to depict positive and negative emotions. Our findings demonstrate a lack of simple correspondence between emotions and musical features of spontaneous musical improvisation. While improvisations in response to positive emotional cues were more likely to be in major keys, have faster tempos, faster key press velocities and more staccato notes when compared to negative improvisations, there was a wide distribution for each emotion with components that directly violated these primary associations. The finding that musicians often combine disparate features together in order to convey emotion during improvisation suggests that structural diversity may be an essential feature of the ability of music to express a wide range of emotion.
doi:10.1371/journal.pone.0105144
PMCID: PMC4140734  PMID: 25144200
25.  Production and perception of legato, portato, and staccato articulation in saxophone playing 
This paper investigates the production and perception of different articulation techniques on the saxophone. In a production experiment, two melodies were recorded that required different effectors to play the tones (tongue-only actions, finger-only actions, combined tongue and finger actions) at three different tempi. A sensor saxophone reed was developed to monitor tongue-reed interactions during performance. In the slow tempo condition, combined tongue-finger actions showed improved timing compared to the timing of the tongue alone. This observation supports the multiple timer hypothesis, where the tongue's timekeeper benefits from a coupling to the timekeeper of the fingers. In the fast tempo condition, finger-only actions were less precise than tongue-only actions, and combined tongue-finger actions showed higher timing variability, close to the level of finger-only actions. This suggests that the finger actions have a dominant influence on the overall timing of saxophone performance. In a listening experiment we investigated whether motor expertise in music performance influences the perception of articulation techniques in saxophone performance. Participants with different backgrounds in music making (saxophonists, musicians not playing the saxophone, and non-musicians) completed an AB-X listening test. They had to discriminate between saxophone phrases played with different articulation techniques (legato, portato, staccato). Participants across all three groups discriminated the sound of staccato articulation well from the sound of portato articulation and legato articulation. Errors occurred across all groups of listeners when legato articulation (no tonguing) and portato articulation (soft tonguing) had to be discriminated. Saxophonists' results were superior compared to the results of the other two groups, suggesting that expertise in saxophone playing facilitated the discrimination task.
doi:10.3389/fpsyg.2014.00690
PMCID: PMC4097958  PMID: 25076918
saxophone; articulation; music performance; timing; sensors; acoustics
