The theory of embodied music cognition states that the perception and cognition of music is firmly, although not exclusively, linked to action patterns associated with that music. In this regard, the focus lies mostly on how music promotes certain action tendencies (e.g., dance, entrainment, etc.). Only recently have studies started to devote attention to the reciprocal effects that people’s body movements may exert on how they perceive certain aspects of music and sound (e.g., pitch, meter, musical preference, etc.). The present study positions itself in this line of research. The central research question is whether expressive body movements, which are systematically paired with music, can modulate children’s perception of musical expressiveness. We present a behavioral experiment in which different groups of children (7–8 years, N = 46) either repetitively performed a happy or a sad choreography in response to expressively ambiguous music or merely listened to that music. The results of our study indeed show that children’s perception of musical expressiveness is modulated in accordance with the expressive character of the dance choreography performed to the music. This finding supports theories that claim a strong connection between action and perception, although further research is needed to uncover the details of this connection.
Entrainment has been studied in a variety of contexts including music perception, dance, verbal communication and motor coordination more generally. Here we seek to provide a unifying framework that incorporates the key aspects of entrainment as it has been studied in these varying domains. We propose that there are a number of types of entrainment that build upon pre-existing adaptations that allow organisms to perceive stimuli as rhythmic, to produce periodic stimuli, and to integrate the two using sensory feedback. We suggest that social entrainment is a special case of spatiotemporal coordination where the rhythmic signal originates from another individual. We use this framework to understand the function and evolutionary basis for coordinated rhythmic movement and to explore questions about the nature of entrainment in music and dance. The framework of entrainment presented here has a number of implications for the vocal learning hypothesis and other proposals for the evolution of coordinated rhythmic behavior across an array of species.
In all ages and countries, music and dance have constituted a central part of human culture and communication. Recently, vocal-learning animals such as parrots and elephants have been found to share rhythmic ability with humans. Thus, we investigated the rhythmic synchronization of budgerigars, a vocal-mimicking parrot species, under controlled conditions and a systematically designed experimental paradigm as a first step in understanding the evolution of musical entrainment. We trained eight budgerigars to perform isochronous tapping tasks in which they pecked a key to the rhythm of audio–visual metronome-like stimuli. The budgerigars showed evidence of entrainment to external stimuli over a wide range of tempos. They seemed to be inherently inclined to tap at fast tempos, which have a similar time scale to the rhythm of budgerigars' natural vocalizations. We suggest that vocal learning might have contributed to their performance, which resembled that of humans.
Music is a cross-cultural universal, a ubiquitous activity found in every known human culture. Individuals demonstrate manifestly different preferences in music, and yet relatively little is known about the underlying structure of those preferences. Here, we introduce a model of musical preferences based on listeners’ affective reactions to excerpts of music from a wide variety of musical genres. The findings from three independent studies converged to suggest that there exists a latent five-factor structure underlying music preferences that is genre-free, and reflects primarily emotional/affective responses to music. We have interpreted and labeled these factors as: 1) a Mellow factor comprising smooth and relaxing styles; 2) an Urban factor defined largely by rhythmic and percussive music, such as is found in rap, funk, and acid jazz; 3) a Sophisticated factor that includes classical, operatic, world, and jazz; 4) an Intense factor defined by loud, forceful, and energetic music; and 5) a Campestral factor comprising a variety of different styles of direct and rootsy music such as is often found in country and singer-songwriter genres. The findings from a fourth study suggest that preferences for the MUSIC factors are affected by both the social and auditory characteristics of the music.
MUSIC; PREFERENCES; INDIVIDUAL DIFFERENCES; FACTOR ANALYSIS
Individuals with autism show impairments in emotional tuning, social interactions and communication. These are functions that have been attributed to the putative human mirror neuron system (MNS), which contains neurons that respond to the actions of self and others. It has been proposed that a dysfunction of that system underlies some of the characteristics of autism. Here, we review behavioral and imaging studies that implicate the MNS (or a brain network with similar functions) in sensory-motor integration and speech representation, and review data supporting the hypothesis that MNS activity could be abnormal in autism. In addition, we propose that an intervention designed to engage brain regions that overlap with the MNS may have significant clinical potential. We argue that this engagement could be achieved through forms of music making. Music making with others (e.g., playing instruments or singing) is a multi-modal activity that has been shown to engage brain regions that largely overlap with the human MNS. Furthermore, many children with autism thoroughly enjoy participating in musical activities. Such activities may enhance their ability to focus and interact with others, thereby fostering the development of communication and social skills. Thus, interventions incorporating methods of music making may offer a promising approach for facilitating expressive language in otherwise nonverbal children with autism.
Autism; Music; Language; Brain; Mirror neuron system; Auditory-motor mapping training
Musical emotions, such as happiness and sadness, have been investigated using instrumental music devoid of linguistic content. However, pop and rock, the most common musical genres, utilize lyrics for conveying emotions. Using participants’ self-selected musical excerpts, we studied their behavior and brain responses to elucidate how lyrics interact with musical emotion processing, as reflected by emotion recognition and activation of limbic areas involved in affective experience. We extracted samples from subjects’ selections of sad and happy pieces and sorted them according to the presence of lyrics. Acoustic feature analysis showed that music with lyrics differed from music without lyrics in spectral centroid, a feature related to perceptual brightness, whereas sad music with lyrics did not diverge from happy music without lyrics, indicating the role of other factors in emotion classification. Behavioral ratings revealed that happy music without lyrics induced stronger positive emotions than happy music with lyrics. We also acquired functional magnetic resonance imaging data while subjects performed affective tasks regarding the music. First, using ecological and acoustically variable stimuli, we broadened previous findings about the brain processing of musical emotions and of songs versus instrumental music. Additionally, contrasts between sad music with versus without lyrics recruited the parahippocampal gyrus, the amygdala, the claustrum, the putamen, the precentral gyrus, the medial and inferior frontal gyri (including Broca’s area), and the auditory cortex, while the reverse contrast produced no activations. Happy music without lyrics activated structures of the limbic system and the right pars opercularis of the inferior frontal gyrus, whereas auditory regions alone responded to happy music with lyrics. These findings point to the role of acoustic cues for the experience of happiness in music and to the importance of lyrics for sad musical emotions.
music; emotion; fMRI; limbic system; language; acoustic feature
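The spectral centroid mentioned in the abstract above is a simple brightness measure: the magnitude-weighted mean frequency of a short-time spectrum. The sketch below is only an illustration of that formula; the frame length, windowing, and averaging are assumed here for illustration and are not the feature-extraction settings used in the study.

```python
import numpy as np

def spectral_centroid(signal, sfreq, frame_len=2048):
    """Mean spectral centroid (Hz) over non-overlapping Hann-windowed frames."""
    window = np.hanning(frame_len)
    freqs = np.fft.rfftfreq(frame_len, d=1.0 / sfreq)
    centroids = []
    for start in range(0, len(signal) - frame_len + 1, frame_len):
        mag = np.abs(np.fft.rfft(signal[start:start + frame_len] * window))
        if mag.sum() > 0:  # skip silent frames
            centroids.append(np.sum(freqs * mag) / np.sum(mag))
    return float(np.mean(centroids)) if centroids else 0.0
```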
Previous research has shown that the matching of rhythmic behaviour between individuals (synchrony) increases cooperation. Such synchrony is most noticeable in music, dance and collective rituals. As well as the matching of behaviour, such collective performances typically involve shared intentionality: performers actively collaborate to produce joint actions. Over three experiments we examined the importance of shared intentionality in promoting cooperation from group synchrony. Experiment 1 compared a condition in which group synchrony was produced through shared intentionality to conditions in which synchrony or asynchrony were created as a by-product of hearing the same or different rhythmic beats. We found that synchrony combined with shared intentionality produced the greatest level of cooperation. To examine the importance of synchrony when shared intentionality is present, Experiment 2 compared a condition in which participants deliberately worked together to produce synchrony with a condition in which participants deliberately worked together to produce asynchrony. We found that synchrony combined with shared intentionality produced the greatest level of cooperation. Experiment 3 manipulated both the presence of synchrony and shared intentionality and found significantly greater cooperation with synchrony and shared intentionality combined. Path analysis supported a reinforcement of cooperation model according to which perceiving synchrony when there is a shared goal to produce synchrony provides immediate feedback for successful cooperation, thereby reinforcing the group’s cooperative tendencies. The reinforcement of cooperation model helps to explain the evolutionary conservation of traditional music and dance performances, and furthermore suggests that the collectivist values of such cultures may be an essential part of the mechanisms by which synchrony galvanises cooperative behaviours.
The influence of formal musical training on auditory cognition has been well established. For the majority of children, however, musical experience does not primarily consist of adult-guided training on a musical instrument. Instead, young children mostly engage in everyday musical activities such as singing and musical play. Here, we review recent electrophysiological and behavioral studies carried out in our laboratory and elsewhere which have begun to map how developing auditory skills are shaped by such informal musical activities both at home and in playschool-type settings. Although more research is still needed, the evidence emerging from these studies suggests that, in addition to formal musical training, informal musical activities can also influence the maturation of auditory discrimination and attention in preschool-aged children.
music; brain development; event-related potential; training; auditory perception; informal musical activities
As the main interhemispheric fiber tract, the corpus callosum (CC) is of particular importance for musicians who simultaneously engage parts of both hemispheres to process and play music. Professional musicians who began music training before the age of 7 years have larger anterior CC areas than do nonmusicians, which suggests that plasticity due to music training may occur in the CC during early childhood. However, no study has yet demonstrated that the increased CC area found in musicians is due to music training rather than to preexisting differences. We tested the hypothesis that approximately 29 months of instrumental music training would cause a significant increase in the size of particular subareas of the CC known to have fibers that connect motor-related areas of both hemispheres. On the basis of total weekly practice time, a sample of 31 children aged 5–7 was divided into three groups: high-practicing, low-practicing, and controls. No CC size differences were seen at baseline, but differences emerged after an average of 29 months of observation in the high-practicing group in the anterior midbody of the CC (which connects premotor and supplementary motor areas of the two hemispheres). Total weekly music exposure predicted degree of change in this subregion of the CC as well as improvement on a motor-sequencing task. Our results show that it is intense musical experience/practice, not preexisting differences, that is responsible for the larger anterior CC area found in professional adult musicians.
music; brain; corpus callosum; children; plasticity
Environmental prevention strategies in club settings where music and dance events are featured could provide an important new arena for the prevention of drug use and other risky behaviors (e.g., sexual risk taking, intoxication and drug use, aggression, and driving under the influence). Electronic music dance events (EMDEs) occur in clubs that attract young, emerging adults (18–25 years of age) and attract individuals who engage in various types of drug use. Borrowing from the environmental prevention studies that focus on reducing alcohol use and related problems, a model for drug prevention in the club setting is proposed. Initially, an overview of the relationships between EMDEs and drug use and other risky behaviors is presented. Next, rationales for environmental strategies are provided. Finally, an environmental approach to prevention of drug use and risky behaviors in clubs is described. This comprehensive set of environmental strategies is designed to be mutually supportive and interactive. Environmental strategies are believed to provide potential for developing an efficacious prevention strategy. The environmental prevention approach presented here is composed of three intervention domains: (1) Mobilization, (2) Strategies for the Exterior Environment, and (3) Strategies for the Interior Environment.
drugs; prevention; clubs
Expert ensemble musicians produce exquisitely coordinated sounds, but rehearsal is typically required to do so. Ensemble coordination may thus be influenced by the degree to which individuals are familiar with each other's parts. Such familiarity may affect the ability to predict and synchronize with co-performers' actions. Internal models related to action simulation and anticipatory musical imagery may be affected by knowledge of (1) the musical structure of a co-performer's part (e.g., in terms of its rhythm and phrase structure) and/or (2) the co-performer's idiosyncratic playing style (e.g., expressive micro-timing variations). The current study investigated the effects of familiarity on interpersonal coordination in piano duos. Skilled pianists were required to play several duets with different partners. One condition included duets for which co-performers had previously practiced both parts, while another condition included duets for which each performer had practiced only their own part. Each piece was recorded six times without joint rehearsal or visual contact to examine the effects of increasing familiarity. Interpersonal coordination was quantified by measuring asynchronies between pianists' keystroke timing and the correlation of their body (head and torso) movements, which were recorded with a motion capture system. The results suggest that familiarity with a co-performer's part, in the absence of familiarity with their playing style, engenders predictions about micro-timing variations that are based instead upon one's own playing style, leading to a mismatch between predictions and actual events at short timescales. Predictions at longer timescales—that is, those related to musical measures and phrases, and reflected in head movements and body sway—are, however, facilitated by familiarity with the structure of a co-performer's part. These findings point to a dissociation between interpersonal coordination at the level of keystrokes and body movements.
interpersonal coordination; body movement; music; ensembles; sensorimotor synchronization
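As a concrete illustration of the two coordination measures named in the abstract above, the following sketch computes keystroke asynchronies between matched note onsets and the Pearson correlation of two body-movement time series. The onset-matching assumption and the preprocessing are illustrative simplifications, not the authors' analysis pipeline.

```python
import numpy as np

def mean_abs_asynchrony(onsets_a, onsets_b):
    """Mean absolute keystroke asynchrony (s), assuming the k-th onset of each
    pianist corresponds to the same nominally simultaneous score event."""
    a, b = np.asarray(onsets_a, dtype=float), np.asarray(onsets_b, dtype=float)
    return float(np.mean(np.abs(a - b)))

def movement_correlation(motion_a, motion_b):
    """Pearson correlation of two one-dimensional movement signals
    (e.g., head or torso position), assumed to share the same time base."""
    return float(np.corrcoef(motion_a, motion_b)[0, 1])
```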
This paper proposes a method to translate human EEG into music, so as to represent mental state by music. The arousal levels of the brain's mental state and of musical emotion are implicitly used as the bridge between the mental world and the music. The arousal level of the brain is based on the EEG features extracted mainly by wavelet analysis, and the music arousal level is related to musical parameters such as pitch, tempo, rhythm, and tonality. While composing, some music principles (harmonics and structure) were taken into consideration. With EEGs during various sleep stages as an example, the music generated from them had different patterns of pitch, rhythm, and tonality. Thirty-five volunteers listened to the music pieces, and a significant difference in music arousal levels was found. This implies that different mental states may be identified by the corresponding music, and so the music from EEG may be a potential tool for EEG monitoring, biofeedback therapy, and so forth.
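The abstract above describes extracting EEG features by wavelet analysis and mapping an arousal level onto musical parameters such as pitch, tempo, rhythm, and tonality. The sketch below is a heavily simplified, hypothetical version of such a mapping: the wavelet-energy arousal proxy and the specific tempo, pitch-range, and mode rules are assumptions made for illustration, not the paper's actual algorithm.

```python
import numpy as np
import pywt  # PyWavelets

def arousal_from_eeg(epoch, wavelet="db4", level=5):
    """Crude arousal index in [0, 1] for a single-channel EEG epoch: the share
    of signal energy in the finest wavelet detail coefficients (an assumption)."""
    coeffs = pywt.wavedec(epoch, wavelet, level=level)
    energies = np.array([np.sum(c ** 2) for c in coeffs])
    return float(energies[-1] / (energies.sum() + 1e-12))

def arousal_to_music_params(arousal):
    """Map arousal onto coarse musical parameters (illustrative ranges only)."""
    return {
        "tempo_bpm": 50 + 100 * arousal,               # low arousal -> slow music
        "pitch_range_semitones": 7 + int(17 * arousal),
        "mode": "major" if arousal > 0.5 else "minor",
    }

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    epoch = rng.standard_normal(1024)                  # stand-in for an EEG epoch
    print(arousal_to_music_params(arousal_from_eeg(epoch)))
```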
Are there bi-directional influences between speech perception and music perception? An answer to this question is essential for understanding the extent to which the speech and music that we hear are processed by domain-general auditory processes and/or by distinct neural auditory mechanisms. This review summarizes a large body of behavioral and neuroscientific findings which suggest that the musical experience of trained musicians does modulate speech processing, and a sparser set of data, largely on pitch processing, which suggest in addition that linguistic experience, in particular learning a tone language, modulates music processing. Although research has focused mostly on the effects of music on speech, we argue that both directions of influence need to be studied, and conclude that the picture which thus emerges is one of mutual interaction across domains. In particular, it is not simply that experience with spoken language has some effects on music perception, and vice versa, but that because of shared domain-general subcortical and cortical networks, experiences in both domains influence behavior in both domains.
speech; language; music; auditory processing; domain-general processes; interaction; transfer effects; brain and behavior
Nowadays, many people use portable players to enrich their daily life with enjoyable music. However, in noisy environments, the player volume is often set to extremely high levels in order to drown out the intense ambient noise and satisfy the appetite for music. Extensive and inappropriate usage of portable music players might cause subtle damage to the auditory system that is not behaviorally detectable at an early stage of hearing impairment. Here, by means of magnetoencephalography, we objectively examined detrimental effects of portable music player misuse on the population-level frequency tuning in the human auditory cortex. We compared two groups of young people: one group had listened to music with portable music players intensively for a long period of time, while the other group had not. Both groups performed equally and normally in standard audiological examinations (pure tone audiogram, speech test, and hearing-in-noise test). However, the objective magnetoencephalographic data demonstrated that the population-level frequency tuning in the auditory cortex of the portable music player users was significantly broadened compared to the non-users, when attention was distracted from the auditory modality; this group difference vanished when attention was directed to the auditory modality. Our conclusion is that extensive and inadequate usage of portable music players could cause subtle damage, which standard behavioral audiometric measures fail to detect at an early stage. However, this damage could lead to future irreversible hearing disorders, which would have a huge negative impact on the quality of life of those affected, and on society as a whole.
Background and Objectives
There is an emerging evidence base for the use of music therapy in the treatment of severe mental illness. Whilst different models of music therapy have been developed in mental health care, none have specifically accounted for the features and context of acute in-patient settings. This review aimed to identify how music therapy is provided for acute adult psychiatric in-patients and what outcomes have been reported.
A systematic review using medical, psychological and music therapy databases. Papers describing music therapy with acute adult psychiatric in-patients were included. Analysis utilised narrative synthesis.
Ninety-eight papers were identified, of which 35 reported research findings. Open group work and active music making for nonverbal expression alongside verbal reflection were emphasised. Aims were engagement, communication and interpersonal relationships focusing upon immediate areas of need rather than longer term insight. The short stay, patient diversity and institutional structure influenced delivery and resulted in a focus on single sessions, high session frequency, more therapist direction, flexible use of musical activities, predictable musical structures, and clear realistic goals. Outcome studies suggested effectiveness in addressing a range of symptoms, but were limited by methodological shortcomings and small sample sizes. Studies with significant positive effects all used active musical participation with a degree of structure and were delivered in four or more sessions.
No single clearly defined model exists for music therapy with adults in acute psychiatric in-patient settings, and described models are not conclusive. Greater frequency of therapy, active structured music making with verbal discussion, consistency of contact and boundaries, an emphasis on building a therapeutic relationship and building patient resources may be of particular importance. Further research is required to develop specific music therapy models for this patient group that can be tested in experimental studies.
Knowledge in microbiology is reaching an extreme level of diversification and complexity, which paradoxically results in a strong reduction in the intelligibility of microbial life. Nowadays, the “score of life” metaphor expresses the complexity of living systems more accurately than the classic “book of life.” Music and life can be represented at lower hierarchical levels by music scores and genomic sequences, and such representations have a generational influence on the reproduction of music and life. If music can be considered a representation of life, that representation remains as unthinkable as life itself. The analysis of scores and genomic sequences might provide mechanistic, phylogenetic, and evolutionary insights into music and life, but not into their real dynamics and nature, which, as Wittgenstein proposed, remain unthinkable. As complex systems, life and music are composed of thinkable and merely showable parts, and a strategy of half-thinking, half-seeing is needed to expand knowledge. Complex models for complex systems, based on experiences of trans-hierarchical integration, should be developed in order to provide a mixture of legibility and imageability of biological processes, which should lead to higher levels of intelligibility of microbial life.
intelligibility; complex systems; Wittgenstein; metaphors; epistemology
The possible transfer of musical expertise to the acquisition of syntactical structures in first and second language has emerged recently as an intriguing topic in the research of cognitive processes. However, it is unlikely that the benefits of musical training extend equally to the acquisition of all syntactical structures. As cognitive transfer presumably requires overlapping processing components and brain regions involved in these processing components, one can surmise that transfer between musical ability and syntax acquisition would be limited to structural elements that are shared between the two. We propose that musical expertise transfers only to the processing of recursive long-distance dependencies inherent in hierarchical syntactic structures. In this study, we taught fifty-six participants with widely varying degrees of musical expertise the artificial language BROCANTO, which allows the direct comparison of long-distance and local dependencies. We found that the quantity of musical training (measured in accumulated hours of practice and instruction) explained unique variance in performance in the long-distance dependency condition only. These data suggest that musical training facilitates the acquisition specifically of hierarchical syntactic structures.
syntax acquisition; hierarchical syntax; L2 learning; musical training; transfer effects
Overuse (injury) syndrome, common in musicians, is characterised by persisting pain and tenderness in the muscles and joint ligaments of the upper limb due to excessive use and in more advanced instances by weakness and loss of response and control in the affected muscle groups. This occurs typically in tertiary music students when their practice load is raised. In seven Australian performing music schools the minimum prevalence of the condition was found to be 9.3%. In two music schools where the study was more controlled the incidences were 13% and 21%. The factors leading to overuse (injury) syndrome may be identified as follows: the genetic factor of vulnerability, which cannot be altered; the student's technique, which may be influenced by teaching and application so that it is more "energy efficient"; and the time × intensity of practice, which is totally within the student's control. Prevention involves education of staff and students about the overuse process, rationalisation of practice habits and repertoire, abolition or reduction of static loading of the weight of the instruments, and earlier reporting when the problem is most easily corrected. Psychological problems arising in this syndrome appeared to occur as a reaction to the condition rather than as a causal factor.
Music has long been associated with trance states, but very little has been written about the modern western discussion of music as a form of hypnosis or ‘brainwashing’. However, from Mesmer's use of the glass armonica to the supposed dangers of subliminal messages in heavy metal, the idea that music can overwhelm listeners' self-control has been a recurrent theme. In particular, the concepts of automatic response and conditioned reflex have been the basis for a model of physiological psychology in which the self has been depicted as vulnerable to external stimuli such as music. This article will examine the discourse of hypnotic music from animal magnetism and the experimental hypnosis of the nineteenth century to the brainwashing panics since the Cold War, looking at the relationship between concerns about hypnotic music and the politics of the self and sexuality.
Music; hypnosis; Charcot; brainwashing; mesmerism
The Musical Emotional Bursts (MEB) consist of 80 brief musical executions expressing basic emotional states (happiness, sadness and fear) and neutrality. These musical bursts were designed to be the musical analog of the Montreal Affective Voices (MAV)—a set of brief non-verbal affective vocalizations portraying different basic emotions. The MEB consist of short (mean duration: 1.6 s) improvisations on a given emotion or of imitations of a given MAV stimulus, played on a violin (10 stimuli × 4 [3 emotions + neutral]), or a clarinet (10 stimuli × 4 [3 emotions + neutral]). The MEB arguably represent a primitive form of music emotional expression, just like the MAV represent a primitive form of vocal, non-linguistic emotional expression. To create the MEB, stimuli were recorded from 10 violinists and 10 clarinetists, and then evaluated by 60 participants. Participants evaluated 240 stimuli [30 stimuli × 4 (3 emotions + neutral) × 2 instruments] by performing either a forced-choice emotion categorization task, a valence rating task or an arousal rating task (20 subjects per task); 40 MAVs were also used in the same session with similar task instructions. Recognition accuracy of emotional categories expressed by the MEB (n = 80) was lower than for the MAVs but still very high with an average percent correct recognition score of 80.4%. Highest recognition accuracies were obtained for happy clarinet (92.0%) and fearful or sad violin (88.0% each) MEB stimuli. The MEB can be used to compare the cerebral processing of emotional expressions in music and vocal communication, or used for testing affective perception in patients with communication problems.
music; emotion; auditory stimuli; voices
Congenital amusia is a neurodevelopmental disorder that affects about 3% of the adult population. Adults experiencing this musical disorder in the absence of macroscopically visible brain injury are described as cases of congenital amusia under the assumption that the musical deficits have been present from birth. Here, we show that this disorder can be expressed in the developing brain. We found that children (10–13 years old) exhibit a marked deficit in the detection of fine-grained pitch differences in both musical and acoustic contexts in comparison to their normally developing peers comparable in age and general intelligence. This behavioral deficit could be traced to their abnormal P300 brain responses to the detection of subtle pitch changes. The altered pattern of electrical activity does not seem to arise from anomalous functioning of the auditory cortex, because all early components of the brain potentials (the N100, the MMN, and the P200) appear normal. Rather, the brain and behavioral measures point to disrupted information propagation from the auditory cortex to other cortical regions. Furthermore, the behavioral and neural manifestations of the disorder remained unchanged after 4 weeks of daily musical listening. These results show that congenital amusia can be detected in childhood despite regular musical exposure and normal intellectual functioning.
Background: Individuals with dementia often experience poor quality of life (QOL) due to behavioral and psychological symptoms of dementia (BPSD). Music therapy can reduce BPSD, but most studies have focused on patients with mild to moderate dementia. We hypothesized that music intervention would have beneficial effects compared with a no-music control condition, and that interactive music intervention would have stronger effects than passive music intervention.
Methods: Thirty-nine individuals with severe Alzheimer's disease were randomly and blindly assigned to two music intervention groups (passive or interactive) and a no-music control group. Music intervention involved individualized music. Short-term effects were evaluated via emotional response and stress levels measured with the autonomic nerve index and the Faces Scale. Long-term effects were evaluated by BPSD changes using the Behavioral Pathology in Alzheimer's Disease (BEHAVE-AD) Rating Scale.
Results: Passive and interactive music interventions caused short-term parasympathetic dominance. Interactive intervention caused the greatest improvement in emotional state. Greater long-term reduction in BPSD was observed following interactive intervention, compared with passive music intervention and a no-music control condition.
Conclusion: Music intervention can reduce stress in individuals with severe dementia, with interactive interventions exhibiting the strongest beneficial effects. Since interactive music intervention can restore residual cognitive and emotional function, this approach may be useful for aiding severe dementia patients’ relationships with others and improving QOL. The registration number of the trial and the name of the trial registry are UMIN000008801 and “Examination of Effective Nursing Intervention for Music Therapy for Severe Dementia Elderly Person” respectively.
Alzheimer's disease (AD); behavioral and psychological symptoms of dementia (BPSD); interactive music intervention; passive music intervention; the autonomic nerve index; the Behavioral Pathology in Alzheimer's Disease (BEHAVE-AD) Rating Scale; residual function; cognitive reserve
To address the problem of mixing in EEG or MEG connectivity analysis we exploit that noninteracting brain sources do not contribute systematically to the imaginary part of the cross-spectrum. Firstly, we propose to apply the existing subspace method “RAP-MUSIC” to the subspace found from the dominant singular vectors of the imaginary part of the cross-spectrum rather than to the conventionally used covariance matrix. Secondly, to estimate the specific sources interacting with each other, we use a modified LCMV-beamformer approach in which the source direction for each voxel was determined by maximizing the imaginary coherence with respect to a given reference. These two methods are applicable in this form only if the number of interacting sources is even, because odd-dimensional subspaces collapse to even-dimensional ones. Simulations show that (a) RAP-MUSIC based on the imaginary part of the cross-spectrum accurately finds the correct source locations, that (b) conventional RAP-MUSIC fails to do so since it is highly influenced by noninteracting sources, and that (c) the second method correctly identifies those sources which are interacting with the reference. The methods are also applied to real data for a motor paradigm, resulting in the localization of four interacting sources presumably in sensory-motor areas.
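To make the key idea above more concrete, the sketch below estimates the sensor cross-spectrum at a target frequency, takes its imaginary part, and extracts the dominant singular subspace on which a RAP-MUSIC scan would then operate. The segmentation and windowing choices are illustrative assumptions; the RAP-MUSIC source scan itself and the modified LCMV beamformer are not shown.

```python
import numpy as np

def imaginary_cross_spectrum_subspace(data, sfreq, freq, n_sources=2, seg_len=256):
    """data: (n_channels, n_samples) array. Returns an (n_channels, n_sources)
    subspace spanned by the dominant left singular vectors of the imaginary
    part of the cross-spectrum at the target frequency."""
    n_ch, n_samp = data.shape
    window = np.hanning(seg_len)
    freqs = np.fft.rfftfreq(seg_len, d=1.0 / sfreq)
    f_idx = int(np.argmin(np.abs(freqs - freq)))

    cs = np.zeros((n_ch, n_ch), dtype=complex)
    n_seg = n_samp // seg_len
    for s in range(n_seg):
        seg = data[:, s * seg_len:(s + 1) * seg_len] * window
        spec = np.fft.rfft(seg, axis=1)[:, f_idx]   # Fourier coefficient per channel
        cs += np.outer(spec, spec.conj())           # cross-spectral contribution
    cs /= n_seg

    # Non-interacting (and merely volume-conducted) sources do not contribute
    # systematically to the imaginary part, so its dominant subspace should be
    # spanned by pairs of interacting sources (hence n_sources should be even).
    u, _, _ = np.linalg.svd(np.imag(cs))
    return u[:, :n_sources]
```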
In the present study, a music therapeutic intervention according to the ‘Heidelberg Model’ was evaluated as a complementary treatment option for patients with acute tinnitus for whom medical treatment had brought only minimal or no improvement. The central question was whether music therapy in an early phase of tinnitus could reduce tinnitus symptoms and prevent them from becoming chronic. Twenty-three patients with acute tinnitus (6–12 weeks) were included in this study and took part in our manualized short-term music therapeutic treatment, which comprised ten consecutive 50-minute sessions of individualized therapy. Tinnitus severity and individual tinnitus-related distress were assessed with the Tinnitus Beeinträchtigungs-Fragebogen (i.e., Tinnitus Impairment Questionnaire, TBF-12) at baseline, start of treatment, and end of treatment. Score changes in the TBF-12 from start to end of treatment showed significant improvements in tinnitus impairment. This indicates that this music therapy approach, applied in an initial stage of tinnitus, can make an important contribution towards preventing tinnitus from becoming a chronic condition.
Music therapy; early intervention; acute tinnitus; recent-onset tinnitus; chronification
Understanding emotions is fundamental to our ability to navigate and thrive in a complex world of human social interaction. Individuals with Autism Spectrum Disorders (ASD) are known to experience difficulties with the communication and understanding of emotion, such as the nonverbal expression of emotion and the interpretation of emotions of others from facial expressions and body language. These deficits often lead to loneliness and isolation from peers, and social withdrawal from the environment in general. In the case of music, however, there is evidence to suggest that individuals with ASD do not have difficulties recognizing simple emotions. In addition, individuals with ASD have been found to show normal and even superior abilities with specific aspects of music processing, and often show strong preferences towards music. It is possible that these varying abilities with different types of expressive communication may be related to a neural system referred to as the mirror neuron system (MNS), which has been proposed as deficient in individuals with autism. Music’s power to stimulate emotions and intensify our social experiences might activate the MNS in individuals with ASD, and thus provide a neural foundation for music as an effective therapeutic tool. In this review, we present literature on the ontogeny of emotion processing in typical development and in individuals with ASD, with a focus on the case of music.
limbic system; mirror neurons; music; autism; social interaction