The theory of embodied music cognition states that the perception and cognition of music is firmly, although not exclusively, linked to action patterns associated with that music. In this regard, the focus lies mostly on how music promotes certain action tendencies (i.e., dance, entrainment, etc.). Only recently have studies started to devote attention to the reciprocal effects that people’s body movements may exert on how people perceive certain aspects of music and sound (e.g., pitch, meter, musical preference, etc.). The present study positions itself in this line of research. The central research question is whether expressive body movements, which are systematically paired with music, can modulate children’s perception of musical expressiveness. We present a behavioral experiment in which different groups of children (7–8 years, N = 46) either repetitively performed a happy or a sad choreography in response to expressively ambiguous music or merely listened to that music. The results of our study indeed show that children’s perception of musical expressiveness is modulated in accordance with the expressive character of the dance choreography performed to the music. This finding supports theories that claim a strong connection between action and perception, although further research is needed to uncover the details of this connection.
Entrainment has been studied in a variety of contexts including music perception, dance, verbal communication and motor coordination more generally. Here we seek to provide a unifying framework that incorporates the key aspects of entrainment as it has been studied in these varying domains. We propose that there are a number of types of entrainment that build upon pre-existing adaptations that allow organisms to perceive stimuli as rhythmic, to produce periodic stimuli, and to integrate the two using sensory feedback. We suggest that social entrainment is a special case of spatiotemporal coordination where the rhythmic signal originates from another individual. We use this framework to understand the function and evolutionary basis for coordinated rhythmic movement and to explore questions about the nature of entrainment in music and dance. The framework of entrainment presented here has a number of implications for the vocal learning hypothesis and other proposals for the evolution of coordinated rhythmic behavior across an array of species.
In all ages and countries, music and dance have constituted a central part of human culture and communication. Recently, vocal-learning animals such as parrots and elephants have been found to share rhythmic ability with humans. Thus, we investigated the rhythmic synchronization of budgerigars, a vocal-mimicking parrot species, under controlled conditions and a systematically designed experimental paradigm as a first step in understanding the evolution of musical entrainment. We trained eight budgerigars to perform isochronous tapping tasks in which they pecked a key to the rhythm of audio–visual metronome-like stimuli. The budgerigars showed evidence of entrainment to external stimuli over a wide range of tempos. They seemed to be inherently inclined to tap at fast tempos, which have a similar time scale to the rhythm of budgerigars' natural vocalizations. We suggest that vocal learning might have contributed to their performance, which resembled that of humans.
Music is a cross-cultural universal, a ubiquitous activity found in every known human culture. Individuals demonstrate manifestly different preferences in music, and yet relatively little is known about the underlying structure of those preferences. Here, we introduce a model of musical preferences based on listeners’ affective reactions to excerpts of music from a wide variety of musical genres. The findings from three independent studies converged to suggest that there exists a latent five-factor structure underlying music preferences that is genre-free, and reflects primarily emotional/affective responses to music. We have interpreted and labeled these factors as: 1) a Mellow factor comprising smooth and relaxing styles; 2) an Urban factor defined largely by rhythmic and percussive music, such as is found in rap, funk, and acid jazz; 3) a Sophisticated factor that includes classical, operatic, world, and jazz; 4) an Intense factor defined by loud, forceful, and energetic music; and 5) a Campestral factor comprising a variety of different styles of direct and rootsy music such as is often found in country and singer-songwriter genres. The findings from a fourth study suggest that preferences for the MUSIC factors are affected by both the social and auditory characteristics of the music.
music; preferences; individual differences; factor analysis
Individuals with autism show impairments in emotional tuning, social interactions and communication. These are functions that have been attributed to the putative human mirror neuron system (MNS), which contains neurons that respond to the actions of self and others. It has been proposed that a dysfunction of that system underlies some of the characteristics of autism. Here, we review behavioral and imaging studies that implicate the MNS (or a brain network with similar functions) in sensory-motor integration and speech representation, and review data supporting the hypothesis that MNS activity could be abnormal in autism. In addition, we propose that an intervention designed to engage brain regions that overlap with the MNS may have significant clinical potential. We argue that this engagement could be achieved through forms of music making. Music making with others (e.g., playing instruments or singing) is a multi-modal activity that has been shown to engage brain regions that largely overlap with the human MNS. Furthermore, many children with autism thoroughly enjoy participating in musical activities. Such activities may enhance their ability to focus and interact with others, thereby fostering the development of communication and social skills. Thus, interventions incorporating methods of music making may offer a promising approach for facilitating expressive language in otherwise nonverbal children with autism.
Autism; Music; Language; Brain; Mirror neuron system; Auditory-motor mapping training
Musical emotions, such as happiness and sadness, have been investigated using instrumental music devoid of linguistic content. However, pop and rock, the most common musical genres, utilize lyrics for conveying emotions. Using participants’ self-selected musical excerpts, we studied their behavior and brain responses to elucidate how lyrics interact with musical emotion processing, as reflected by emotion recognition and activation of limbic areas involved in affective experience. We extracted samples from subjects’ selections of sad and happy pieces and sorted them according to the presence of lyrics. Acoustic feature analysis showed that music with lyrics differed from music without lyrics in spectral centroid, a feature related to perceptual brightness, whereas sad music with lyrics did not diverge from happy music without lyrics, indicating the role of other factors in emotion classification. Behavioral ratings revealed that happy music without lyrics induced stronger positive emotions than happy music with lyrics. We also acquired functional magnetic resonance imaging data while subjects performed affective tasks regarding the music. First, using ecological and acoustically variable stimuli, we broadened previous findings about the brain processing of musical emotions and of songs versus instrumental music. Additionally, contrasts between sad music with versus without lyrics recruited the parahippocampal gyrus, the amygdala, the claustrum, the putamen, the precentral gyrus, the medial and inferior frontal gyri (including Broca’s area), and the auditory cortex, while the reverse contrast produced no activations. Happy music without lyrics activated structures of the limbic system and the right pars opercularis of the inferior frontal gyrus, whereas auditory regions alone responded to happy music with lyrics. These findings point to the role of acoustic cues for the experience of happiness in music and to the importance of lyrics for sad musical emotions.
music; emotion; fMRI; limbic system; language; acoustic feature
As the main interhemispheric fiber tract, the corpus callosum (CC) is of particular importance for musicians who simultaneously engage parts of both hemispheres to process and play music. Professional musicians who began music training before the age of 7 years have larger anterior CC areas than do nonmusicians, which suggests that plasticity due to music training may occur in the CC during early childhood. However, no study has yet demonstrated that the increased CC area found in musicians is due to music training rather than to preexisting differences. We tested the hypothesis that approximately 29 months of instrumental music training would cause a significant increase in the size of particular subareas of the CC known to have fibers that connect motor-related areas of both hemispheres. On the basis of total weekly practice time, a sample of 31 children aged 5–7 was divided into three groups: high-practicing, low-practicing, and controls. No CC size differences were seen at baseline, but differences emerged after an average of 29 months of observation in the high-practicing group in the anterior midbody of the CC (which connects premotor and supplementary motor areas of the two hemispheres). Total weekly music exposure predicted degree of change in this subregion of the CC as well as improvement on a motor-sequencing task. Our results show that it is intense musical experience/practice, not preexisting differences, that is responsible for the larger anterior CC area found in professional adult musicians.
music; brain; corpus callosum; children; plasticity
Environmental prevention strategies in club settings where music and dance events are featured could provide an important new arena for the prevention of drug use and other risky behaviors (e.g., sexual risk taking, intoxication and drug use, aggression, and driving under the influence). Electronic music dance events (EMDEs) occur in clubs that attract young, emerging adults (18–25 years of age), including individuals who engage in various types of drug use. Borrowing from the environmental prevention studies that focus on reducing alcohol use and related problems, a model for drug prevention in the club setting is proposed. Initially, an overview of the relationships between EMDEs and drug use and other risky behaviors is presented. Next, rationales for environmental strategies are provided. Finally, an environmental approach to prevention of drug use and risky behaviors in clubs is described. This comprehensive set of environmental strategies is designed to be mutually supportive and interactive. Environmental strategies are believed to provide potential for developing an efficacious prevention strategy. The environmental prevention approach presented here is composed of three intervention domains: (1) Mobilization, (2) Strategies for the Exterior Environment, and (3) Strategies for the Interior Environment.
drugs; prevention; clubs
This paper proposes a method to translate human EEG into music, so as to represent mental states through music. The arousal levels of the brain's mental state and of musical emotion are implicitly used as the bridge between the mind and the music. The arousal level of the brain is estimated from EEG features extracted mainly by wavelet analysis, and the music arousal level is related to musical parameters such as pitch, tempo, rhythm, and tonality. While composing, some music principles (harmonics and structure) were taken into consideration. With EEGs recorded during various sleep stages as an example, the music generated from them had different patterns of pitch, rhythm, and tonality. Thirty-five volunteers listened to the music pieces, and a significant difference in music arousal levels was found. This implies that different mental states may be identified by the corresponding music, and so music from EEG may be a potential tool for EEG monitoring, biofeedback therapy, and so forth.
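The core idea above, mapping an EEG-derived arousal level onto musical parameters such as pitch and tempo, can be sketched as follows. This is an illustrative sketch only: the band-power ratio and the mapping rules are assumptions chosen for clarity, not the authors' actual wavelet-based method.

```python
# Hypothetical EEG-arousal-to-music mapping; all functions and
# mapping rules here are illustrative assumptions.

def arousal_from_band_power(alpha_power, beta_power):
    """Crude arousal index in [0, 1]: relative beta-band power."""
    return beta_power / (alpha_power + beta_power)

def arousal_to_music_params(arousal):
    """Map an arousal value in [0, 1] to illustrative musical parameters."""
    tempo_bpm = 60 + arousal * 80          # 60 bpm (calm) .. 140 bpm (aroused)
    base_pitch = 48 + round(arousal * 24)  # MIDI pitch, C3 .. C5
    mode = "major" if arousal > 0.5 else "minor"
    return {"tempo_bpm": tempo_bpm, "pitch": base_pitch, "mode": mode}

# Equal alpha and beta power gives arousal 0.5: a moderate tempo,
# middle C, and (by this rule) a minor mode.
params = arousal_to_music_params(arousal_from_band_power(alpha_power=2.0, beta_power=2.0))
print(params)
```

A full system would additionally impose the harmonic and structural constraints the abstract mentions; the point of the sketch is only the arousal bridge between signal features and musical parameters.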
Nowadays, many people use portable players to enrich their daily life with enjoyable music. However, in noisy environments, the player volume is often set to extremely high levels in order to drown out the intense ambient noise and satisfy the appetite for music. Extensive and inappropriate use of portable music players might cause subtle damage to the auditory system that is not behaviorally detectable in the early stages of hearing impairment. Here, by means of magnetoencephalography, we objectively examined the detrimental effects of portable music player misuse on population-level frequency tuning in the human auditory cortex. We compared two groups of young people: one group had listened to music with portable music players intensively for a long period of time, while the other group had not. Both groups performed equally and normally in standard audiological examinations (pure tone audiogram, speech test, and hearing-in-noise test). However, the objective magnetoencephalographic data demonstrated that population-level frequency tuning in the auditory cortex of the portable music player users was significantly broadened compared to the non-users when attention was distracted from the auditory modality; this group difference vanished when attention was directed to the auditory modality. Our conclusion is that extensive and inadequate use of portable music players can cause subtle damage that standard behavioral audiometric measures fail to detect at an early stage. However, this damage could lead to future irreversible hearing disorders, which would have a huge negative impact on the quality of life of those affected, and on society as a whole.
Knowledge in microbiology is reaching an extreme level of diversification and complexity, which paradoxically results in a strong reduction in the intelligibility of microbial life. Nowadays, the “score of life” metaphor expresses the complexity of living systems more accurately than the classic “book of life.” Music and life can be represented at lower hierarchical levels by music scores and genomic sequences, and such representations have a generational influence on the reproduction of music and life. If music can be considered a representation of life, that representation remains as unthinkable as life itself. The analysis of scores and genomic sequences might provide mechanistic, phylogenetic, and evolutionary insights into music and life, but not into their real dynamics and nature, which remain unthinkable, as Wittgenstein proposed. As complex systems, life and music are composed of parts that are thinkable and parts that can only be shown, so a strategy of half-thinking, half-seeing is needed to expand knowledge. Complex models for complex systems, based on experiences of trans-hierarchical integration, should be developed in order to provide a mixture of legibility and imageability of biological processes, which should lead to higher levels of intelligibility of microbial life.
intelligibility; complex systems; Wittgenstein; metaphors; epistemology
The possible transfer of musical expertise to the acquisition of syntactic structures in first and second language has recently emerged as an intriguing topic in research on cognitive processes. However, it is unlikely that the benefits of musical training extend equally to the acquisition of all syntactic structures. As cognitive transfer presumably requires overlapping processing components and brain regions involved in these processing components, one can surmise that transfer between musical ability and syntax acquisition would be limited to structural elements that are shared between the two. We propose that musical expertise transfers only to the processing of recursive long-distance dependencies inherent in hierarchical syntactic structures. In this study, we taught fifty-six participants with widely varying degrees of musical expertise the artificial language BROCANTO, which allows the direct comparison of long-distance and local dependencies. We found that the quantity of musical training (measured in accumulated hours of practice and instruction) explained unique variance in performance in the long-distance dependency condition only. These data suggest that musical training specifically facilitates the acquisition of hierarchical syntactic structures.
syntax acquisition; hierarchical syntax; L2 learning; musical training; transfer effects
Music has long been associated with trance states, but very little has been written about the modern western discussion of music as a form of hypnosis or ‘brainwashing’. However, from Mesmer's use of the glass armonica to the supposed dangers of subliminal messages in heavy metal, the idea that music can overwhelm listeners' self-control has been a recurrent theme. In particular, the concepts of automatic response and conditioned reflex have been the basis for a model of physiological psychology in which the self has been depicted as vulnerable to external stimuli such as music. This article will examine the discourse of hypnotic music from animal magnetism and the experimental hypnosis of the nineteenth century to the brainwashing panics since the Cold War, looking at the relationship between concerns about hypnotic music and the politics of the self and sexuality.
Music; hypnosis; Charcot; brainwashing; mesmerism
Overuse (injury) syndrome, common in musicians, is characterised by persisting pain and tenderness in the muscles and joint ligaments of the upper limb due to excessive use and in more advanced instances by weakness and loss of response and control in the affected muscle groups. This occurs typically in tertiary music students when their practice load is raised. In seven Australian performing music schools the minimum prevalence of the condition was found to be 9.3%. In two music schools where the study was more controlled the incidences were 13% and 21%. The factors leading to overuse (injury) syndrome may be identified as follows: the genetic factor of vulnerability which cannot be altered; the student's technique which may be influenced by teaching and application so that it is more "energy efficient"; and the time × intensity of practice which is totally within the student's control. Prevention involves education of staff and students about the overuse process, rationalisation of practice habits and repertoire, abolition or reduction of static loading of the weight of the instruments, and earlier reporting when the problem is most easily corrected. Psychological problems arising in this syndrome appeared to occur as a reaction to the condition rather than as a causal factor.
Congenital amusia is a neurodevelopmental disorder that affects about 3% of the adult population. Adults experiencing this musical disorder in the absence of macroscopically visible brain injury are described as cases of congenital amusia under the assumption that the musical deficits have been present from birth. Here, we show that this disorder can be expressed in the developing brain. We found that children (10–13 years old) exhibit a marked deficit in the detection of fine-grained pitch differences in both musical and acoustical contexts in comparison to normally developing peers comparable in age and general intelligence. This behavioral deficit could be traced to their abnormal P300 brain responses to the detection of subtle pitch changes. The altered pattern of electrical activity does not seem to arise from anomalous functioning of the auditory cortex, because all early components of the brain potentials (the N100, the MMN, and the P200) appear normal. Rather, the brain and behavioral measures point to disrupted information propagation from the auditory cortex to other cortical regions. Furthermore, the behavioral and neural manifestations of the disorder remained unchanged after 4 weeks of daily music listening. These results show that congenital amusia can be detected in childhood despite regular musical exposure and normal intellectual functioning.
To address the problem of mixing in EEG or MEG connectivity analysis, we exploit the fact that noninteracting brain sources do not contribute systematically to the imaginary part of the cross-spectrum. Firstly, we propose to apply the existing subspace method “RAP-MUSIC” to the subspace found from the dominant singular vectors of the imaginary part of the cross-spectrum rather than to the conventionally used covariance matrix. Secondly, to estimate the specific sources interacting with each other, we use a modified LCMV-beamformer approach in which the source direction for each voxel was determined by maximizing the imaginary coherence with respect to a given reference. These two methods are applicable in this form only if the number of interacting sources is even, because odd-dimensional subspaces collapse to even-dimensional ones. Simulations show that (a) RAP-MUSIC based on the imaginary part of the cross-spectrum accurately finds the correct source locations, that (b) conventional RAP-MUSIC fails to do so since it is highly influenced by noninteracting sources, and that (c) the second method correctly identifies those sources which are interacting with the reference. The methods are also applied to real data for a motor paradigm, resulting in the localization of four interacting sources presumably in sensory-motor areas.
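The premise of this approach, that instantaneous mixing (as produced by volume conduction) contributes only to the real part of the cross-spectrum, while a time-lagged interaction produces a substantial imaginary part, can be illustrated with a toy two-channel simulation. This is a sketch with synthetic signals and arbitrary mixing coefficients, not the RAP-MUSIC or beamformer pipeline itself:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4096
s = rng.standard_normal(n)  # one underlying source

# Instantaneous mixing: both channels see the same source at zero lag,
# as volume conduction would produce (coefficients are arbitrary).
x = s + 0.1 * rng.standard_normal(n)
y_instant = 0.7 * s + 0.1 * rng.standard_normal(n)

# Time-lagged coupling (a genuine interaction): the second channel
# sees the source delayed by 5 samples.
y_lagged = np.roll(s, 5) + 0.1 * rng.standard_normal(n)

def mean_abs_imag_cross_spectrum(a, b):
    """Average magnitude of the imaginary part of the cross-spectrum."""
    A, B = np.fft.rfft(a), np.fft.rfft(b)
    return np.mean(np.abs(np.imag(A * np.conj(B))))

imag_instant = mean_abs_imag_cross_spectrum(x, y_instant)
imag_lagged = mean_abs_imag_cross_spectrum(x, y_lagged)

# The lagged interaction leaves a much larger imaginary cross-spectrum
# than purely instantaneous mixing, which is why methods built on the
# imaginary part are insensitive to volume-conduction artifacts.
print(imag_lagged / imag_instant)
```

In practice the cross-spectrum would be averaged over epochs and the subspace of its imaginary part fed to RAP-MUSIC, as the abstract describes; the sketch only demonstrates the underlying signal property.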
Speech and music are highly complex signals that share many acoustic features. Pitch, timbre, and timing can be used as overarching perceptual categories for describing these shared properties. The acoustic cues contributing to these percepts also have distinct subcortical representations, which can be selectively enhanced or degraded in different populations. Musically trained subjects are found to have enhanced subcortical representations of pitch, timbre, and timing. The effects of musical experience on subcortical auditory processing are pervasive and extend beyond music to the domains of language and emotion. The neural encoding of pitch, timbre, and timing is malleable, shaped by both lifelong experience and short-term training. This conceptual framework and supporting data can be applied to consider sensory learning of speech and music through a hearing aid or cochlear implant.
brain stem; subcortical; musical training; cochlear implant
Understanding emotions is fundamental to our ability to navigate and thrive in a complex world of human social interaction. Individuals with Autism Spectrum Disorders (ASD) are known to experience difficulties with the communication and understanding of emotion, such as the nonverbal expression of emotion and the interpretation of emotions of others from facial expressions and body language. These deficits often lead to loneliness and isolation from peers, and social withdrawal from the environment in general. In the case of music, however, there is evidence to suggest that individuals with ASD do not have difficulties recognizing simple emotions. In addition, individuals with ASD have been found to show normal and even superior abilities with specific aspects of music processing, and often show strong preferences towards music. It is possible that these varying abilities with different types of expressive communication may be related to a neural system referred to as the mirror neuron system (MNS), which has been proposed as deficient in individuals with autism. Music’s power to stimulate emotions and intensify our social experiences might activate the MNS in individuals with ASD, and thus provide a neural foundation for music as an effective therapeutic tool. In this review, we present literature on the ontogeny of emotion processing in typical development and in individuals with ASD, with a focus on the case of music.
limbic system; mirror neurons; music; autism; social interaction
The opinions of others can easily affect how much we value things. We investigated what happens in our brain when we agree with others about the value of an object and whether or not there is evidence, at the neural level, for social conformity through which we change object valuation. Using functional magnetic resonance imaging we independently modeled (1) learning reviewer opinions about a piece of music, (2) reward value while receiving a token for that music, and (3) their interaction in 28 healthy adults. We show that agreement with two “expert” reviewers on music choice produces activity in a region of ventral striatum that also responds when receiving a valued object. It is known that the magnitude of activity in the ventral striatum reflects the value of reward-predicting stimuli [1–8]. We show that social influence on the value of an object is associated with the magnitude of the ventral striatum response to receiving it. This finding provides clear evidence that social influence mediates very basic value signals in known reinforcement learning circuitry [9–12]. Influence at such a low level could contribute to rapid learning and the swift spread of values throughout a population.
Highlights: agreement with reviewers activates the same neural circuitry as object rewards; basic signals of a reward are modulated by social influence on its value; strongly influenced individuals produce more neural responses to disagreement; the anterior insula cortex responds to unanimous opinions of others.
Study Objectives. Studies have found that depictions of unhealthy behaviors (e.g., illicit substance use, violence) are common in popular music lyrics; however, we are unaware of any studies that have specifically analyzed the content of music lyrics for unhealthy sleep-related behaviors. We sought to determine whether behaviors known to perpetuate insomnia symptoms are commonly depicted in the lyrics of popular music. Methods. We searched three online lyrics sites for lyrics with the word “insomnia” in the title and performed content analysis of each of the lyrics. Lyrics were analyzed for the presence or absence of the following perpetuating factors: extending sleep opportunity, using counter-fatigue measures, self-medicating, and engaging in rituals or anti-stimulus control behaviors. Results. We analyzed 83 music lyrics; 47% described one or more perpetuating factors. Thirty percent described individuals engaging in rituals or anti-stimulus control strategies, 24% described self-medicating, 7% described engaging in counter-fatigue measures, and 2% described extending sleep opportunity (e.g., napping during the daytime). Conclusion. Maladaptive strategies known to perpetuate insomnia symptoms are common in popular music. Our results suggest that listeners of these sleep-related songs are frequently exposed to lyrics that depict maladaptive coping mechanisms. Additional studies are needed to examine the direct effects of exposing individuals to music lyrics with this content.
Although physical activity (PA) is universally recommended, most adults are not regular exercisers. Interactive video dance is a novel form of PA in widespread use among young adults, but interest among adults is not known. Postmenopausal women are an appropriate target for interventions to promote PA because they have an increased risk of health problems related to sedentary behavior. We explored perceived advantages and disadvantages of video dance as a personal exercise option in postmenopausal women.
Forty sedentary postmenopausal women (mean age ± SD 57 ± 5 years) were oriented in eight small groups to interactive video dance, which uses a force-sensing pad with directional panels: the player steps on the panels in response to arrows scrolling on a screen, synchronized to music. Perceived advantages and disadvantages were elicited through a nominal group technique (NGT) process.
Participants generated 113 advantages and 71 disadvantages. The most frequently cited advantages were “it's fun” and “improves coordination” (seven of eight groups), the fact that challenge encourages progress (five of eight groups), the potential for weight loss (four of eight groups), and the flexibility of exercise conditions (three of eight groups). Concerns were the potentially long and frustrating learning process, cost (six of eight groups), and possible technical issues (two of eight groups).
The recreational nature of interactive dance exercise was widely appealing to postmenopausal women and might help promote adherence to PA. Initial support to learn basic technical and movement skills may be needed.
Music goes back a very long way in human experience. Music therapy is now used in many disparate areas—from coronary care units to rehabilitation after a stroke. But its widespread adoption has a poor scientific evidence base, founded more on enthusiasm than on proper evaluation in any controlled way. This has led to a lack of clarity about whether any particular type of music is superior, or whether different types of music should be tailored to differing individuals. We therefore conducted a series of controlled studies in which we examined the effects of different styles of music—from raga to jazz—presented in random order to normal young subjects (both musically trained and untrained). We found that, contrary to many beliefs, the effect of a style of music was similar in all subjects, whatever their individual music taste. We also found that this effect appeared to operate at a sub-conscious level through the autonomic nervous system. Furthermore, musical or verbal phrases of a 10 s duration (which coincided with the normal circulatory ‘Mayer’ waves) induced bigger excursions in blood pressure and heart rate (reciprocal of pulse interval) and so triggered more vagal slowing and feelings of calm. These findings now need to be tested in the clinical setting since, if confirmed, this would greatly simplify the practical use of this promising tool.
Neurology; Mayer waves; Cardiovascular control; Human baroreflex
Artistic creativity forms the basis of music culture and the music industry. Composing, improvising, and arranging music are complex creative functions of the human brain whose biological value remains unknown. We hypothesized that practicing music is social communication that requires musical aptitude and even creativity in music. In order to understand the neurobiological basis of music in human evolution and communication, we analyzed polymorphisms of the arginine vasopressin receptor 1A (AVPR1A), serotonin transporter (SLC6A4), catechol-O-methyltransferase (COMT), dopamine receptor D2 (DRD2), and tyrosine hydroxylase 1 (TPH1) genes, which are associated with social bonding and cognitive functions, in 19 Finnish families (n = 343 members) with professional musicians and/or active amateurs. All family members were tested for musical aptitude using the auditory structuring ability test (Karma Music test; KMT) and Carl Seashore's tests for pitch (SP) and for time (ST). Data on creativity in music (composing, improvising, and/or arranging music) were surveyed using a web-based questionnaire. Here we show for the first time that creative functions in music have a strong genetic component (h2 = .84; composing h2 = .40; arranging h2 = .46; improvising h2 = .62) in Finnish multigenerational families. We also show that high music test scores are significantly associated with creative functions in music (p<.0001). We discovered an overall haplotype association with the AVPR1A gene (markers RS1 and RS3) and KMT (p = 0.0008; corrected p = 0.00002), SP (p = 0.0261; corrected p = 0.0072), and combined music test scores (COMB) (p = 0.0056; corrected p = 0.0006). The AVPR1A haplotype AVR+RS1 further suggested a positive association with ST (p = 0.0038; corrected p = 0.00184) and COMB (p = 0.0083; corrected p = 0.0040) using the haplotype-based association test HBAT.
The results suggest that the neurobiology of music perception and production is likely to be related to the pathways affecting intrinsic attachment behavior.
To investigate the neural substrates that underlie spontaneous musical performance, we examined improvisation in professional jazz pianists using functional MRI. By employing two paradigms that differed widely in musical complexity, we found that improvisation (compared to production of over-learned musical sequences) was consistently characterized by a dissociated pattern of activity in the prefrontal cortex: extensive deactivation of dorsolateral prefrontal and lateral orbital regions with focal activation of the medial prefrontal (frontal polar) cortex. Such a pattern may reflect a combination of psychological processes required for spontaneous improvisation, in which internally motivated, stimulus-independent behaviors unfold in the absence of central processes that typically mediate self-monitoring and conscious volitional control of ongoing performance. Changes in prefrontal activity during improvisation were accompanied by widespread activation of neocortical sensorimotor areas (that mediate the organization and execution of musical performance) as well as deactivation of limbic structures (that regulate motivation and emotional tone). This distributed neural pattern may provide a cognitive context that enables the emergence of spontaneous creative activity.
A critical issue in perception is the manner in which top-down expectancies guide lower-level perceptual processes. In speech, a common paradigm is to construct continua ranging between two phonetic endpoints and to determine how higher level lexical context influences the perceived boundary. We applied this approach to music, presenting subjects with major/minor triad continua after brief musical contexts. Two experiments yielded results that differed from classic results in speech perception. In speech, context generally expands the category of the expected stimuli. We found the opposite in music: the major/minor boundary shifted toward the expected category, contracting it. Together, these experiments support the hypothesis that musical expectancy can feed back to affect lower-level perceptual processes. However, it may do so in a way that differs fundamentally from what has been seen in other domains.
Music Perception; Speech Perception; Context Effects; Perceptual Categorization; Interactive Activation; Feedback; Triad Identification