Music is a cross-cultural universal, a ubiquitous activity found in every known human culture. Individuals demonstrate manifestly different preferences in music, and yet relatively little is known about the underlying structure of those preferences. Here, we introduce a model of musical preferences based on listeners’ affective reactions to excerpts of music from a wide variety of musical genres. The findings from three independent studies converged to suggest that there exists a latent five-factor structure underlying music preferences that is genre-free and reflects primarily emotional/affective responses to music. We have interpreted and labeled these factors as: 1) a Mellow factor comprising smooth and relaxing styles; 2) an Urban factor defined largely by rhythmic and percussive music, such as is found in rap, funk, and acid jazz; 3) a Sophisticated factor that includes classical, operatic, world, and jazz; 4) an Intense factor defined by loud, forceful, and energetic music; and 5) a Campestral factor comprising a variety of direct and rootsy styles such as are often found in country and singer-songwriter genres. The findings from a fourth study suggest that preferences for the MUSIC factors are affected by both the social and auditory characteristics of the music.
Music is everywhere we go. It is piped into retail shops, airports, and train stations. It accompanies movies, television programs, and ball games. Manufacturers use it to sell their products, while yoga, massage, and exercise studios use it to relax or invigorate their clients. In addition to all of these uses of music as a background, a form of sonic wallpaper imposed on us by others, many of us seek out music for our own listening – indeed, Americans spend more on music than they do on prescription drugs (Huron, 2001). Taken together, background and intentional music listening add up to more than 5 hours a day of exposure to music for the average American (Levitin, 2006; McCormick, 2009).
When it comes to self-selected music, individuals demonstrate manifestly different tastes. Remarkably, however, little is known about the underlying principles on which such individual musical preferences are based. A challenge to such an investigation is that music is used for many different purposes. One common use of music in contemporary society is pure enjoyment and aesthetic appreciation (Kohut & Levarie, 1950); another relates to music’s ability to inspire dance and physical movement (Dwyer, 1995; Large, 2000; Ronström, 1999). Many individuals also use music functionally, for mood regulation and enhancement (North & Hargreaves, 1996b; Rentfrow & Gosling, 2003; Roe, 1985). Adolescents report that they use music as a distraction from troubles, a means of mood management, a way of reducing loneliness, and a badge of identity for inter- and intragroup self-definition (Bleich, Zillman & Weaver, 1991; Rentfrow & Gosling, 2006; 2007; Rentfrow, McDonald, & Oldmeadow, 2009; Zillmann & Gan, 1997). As adolescents and young adults, we tend to listen to music that our friends listen to, and this contributes to defining our social identity as well as our adult musical tastes and preferences (Creed & Scully, 2000; North & Hargreaves, 1999; Tekman & Hortaçsu, 2002).
Music is also used to enhance concentration and cognitive function, to maintain alertness and vigilance (Emery, Hsiao, Hill, & Frid, 2003; Penn & Bootzin, 1990; Schellenberg, 2004) and increase worker productivity (Newman, Hunt & Rhodes, 1966); moreover, it may have the ability to enhance certain cognitive networks by the way in which it is organized (Richard, Toukhsati, & Field, 2005). Social and protest movements use music for motivation, group cohesion, and to focus their goals and message (Eyerman & Jamison, 1998), and music therapists encourage patients to choose music to meet various therapeutic goals (Davis, Gfeller & Thaut, 1999; Särkamö, et al., 2008). Historically, music has also been used for social bonding, comfort, motivating or coordinating physical labor, the preservation and transmission of oral knowledge, ritual and religion, and the expression of physical or cognitive fitness (for a review, see Levitin, 2008).
Despite the wide variety of functions music serves, a starting point for this article is the assumption that it should be possible to characterize a given individual’s musical preferences or tastes overall, across this wide variety of uses. Although music has received relatively little attention in mainstream social and personality psychology, recent investigations have begun to examine individual differences in music preferences (for a review, see Rentfrow & McDonald, 2009). Results from these investigations suggest that there exists a structure underlying music preferences, with fairly similar music-preference factors emerging across studies. Independent investigations (e.g., Colley, 2008; Delsing, ter Bogt, Engels, & Meeus, 2008; Rentfrow & Gosling, 2003) have also identified similar patterns of relations between the music-preference dimensions and various psychological constructs. The degree of convergence across those studies is encouraging because it suggests that the psychological basis for music preferences is firm. However, despite the consistency, it is not entirely clear what it is about music that attracts people. Is there something inherent in music that influences people’s preferences? Or, are music preferences shaped by social factors?
The aim of the present research is to inform our understanding of the nature of music preferences. Specifically, we argue that research on individual differences in music preferences has been limited by conceptual and methodological constraints that have hindered our understanding of the psychological and social factors underlying preferences in music. This work aims to correct these shortcomings with the goal of advancing theory and research on this important topic.
Cattell and Anderson (1953) conducted one of the first investigations of individual differences in music preferences. Their aim was to develop a method for assessing dimensions of unconscious personality traits. Accordingly, Cattell and his colleagues developed a music preference test consisting of 120 classical and jazz excerpts; respondents reported their degree of liking for each excerpt (Cattell & Anderson, 1953; Cattell & Saunders, 1954). These investigators attempted to interpret 12 factors, which they explained in terms of unconscious personality traits. For example, musical excerpts with fast tempos defined one factor, labeled surgency, and excerpts characterized by melancholy and slow tempos defined another factor, labeled sensitivity. Cattell’s music-preference measure never gained traction, but his results were among the first to suggest a latent structure to music preferences.
It was not until some 50 years later that research on individual differences in music preferences resurfaced. However, whereas Cattell and his colleagues assumed that music preferences reflected unconscious motives, urges, and desires (Cattell & Anderson, 1953; Cattell & Saunders, 1954), the contemporary view is that music preferences are manifestations of explicit psychological traits, possibly in interaction with specific situational experiences, needs, or constraints. More specifically, current research on music preferences draws from interactionist theories (e.g., Buss, 1987; Swann, Rentfrow, & Guinn, 2002) by hypothesizing that people seek musical environments that reinforce and reflect their personalities, attitudes, and emotions.
As a starting point for testing that hypothesis, researchers have begun to map the landscape of music-genre preferences with the aim of identifying its structure. For example, Rentfrow and Gosling (2003) examined individual differences in preferences for 14 broad music genres in three US samples. Results from all three studies converged to reveal four music-preference factors that were labeled reflective & complex (comprising classical, jazz, folk, and blues genres), intense & rebellious (rock, alternative, heavy metal), upbeat & conventional (country, pop, soundtracks, religious), and energetic & rhythmic (rap, soul, electronica). In a study of music preferences among Dutch adolescents, Delsing and colleagues (Delsing, et al., 2008) assessed self-reported preferences for 11 music genres. Their analyses also revealed four preference factors, labeled rock (comprising rock, heavy metal/hardrock, punk/hardcore/grunge, gothic), elite (classical, jazz, gospel), urban (hip-hop/rap, soul/r&b), and pop (trance/techno, top 40/charts). And Colley (2008) investigated self-reported preferences for 11 music genres in a small sample of British university students. Her results revealed four factors for women and five for men. Specifically, three factors, sophisticated (comprising classical, blues, jazz, opera), heavy (rock, heavy metal), and rebellious (rap, reggae), emerged for both men and women, but the mainstream (country, folk, chart pop) factor that emerged for women split into traditional (country, folk) and pop (chart pop) for men.
However, not all studies of music preference structure have obtained such similar findings. For example, George, Stickle, Rachid, and Wopnford (2007) studied individual differences in preferences for 30 music genres in a sample of Canadian adults. Their analyses revealed nine music-preference factors, labeled rebellious (grunge, heavy metal, punk, alternative, classic rock), classical (piano, choral, classical instrumental, opera/ballet, Disney/broadway), rhythmic & intense (hip-hop & rap, pop, rhythm & blues, reggae), easy listening (country, 20th century popular, soft rock, disco, folk/ethnic, swing), fringe (new age, electronic, ambient, techno), contemporary Christian (soft contemporary Christian, hard contemporary Christian), jazz & blues (blues, jazz), and traditional Christian (hymns & southern gospel, gospel). In a study involving German young adults, Schäfer and Sedlmeier (2009) assessed individual differences in self-reported preferences for 25 music genres. Results from their analyses uncovered six music-preference factors, labeled sophisticated (comprising classical, jazz, blues, swing), electronic (techno, trance, house, dance), rock (rock, punk, metal, alternative, gothic, ska), rap (rap, hip hop, reggae), pop (pop, soul, r&b, gospel), and beat, folk, & country (beat, folk, country, rock’n’roll). And in a study involving participants mainly from the Netherlands, Dunn (in press) examined individual differences in preferences for 14 music genres and reported six music-preference factors, labeled rhythm’n blues (comprising jazz, blues, soul), hard rock (rock, heavy metal, alternative), bass heavy (rap, dance), country (country, folk), soft rock (pop, soundtracks), and classical (classical, religious).
Even though the results are not identical, there does appear to be a considerable degree of convergence across these studies. Indeed, in every sample three factors emerged that were very similar: One factor was defined mainly by classical and jazz music; another factor was defined largely by rock and heavy metal music; and the third factor was defined by rap and hip-hop music. There was also a factor comprising mainly country music that emerged in all the samples in which singer-songwriter or story-telling music was included (i.e., six of seven samples). And in half the studies there was a factor composed mostly of new age and electronic styles of music. Thus, there appear to be at least four, and perhaps five, robust music-preference factors.
Although research on individual differences in music preferences has revealed some consistent findings, there are significant limitations that impede theoretical progress in the area. One limitation is that there is no consensus about which music genres to study. Indeed, few researchers appear to use systematic methods to select genres, or to explain how they decided which genres to study. Consequently, different researchers focus on different music genres, with some studying as few as 11 (Colley, 2008; Delsing, et al., 2008) and others as many as 30 genres (George, et al., 2007). Ultimately, these different foci yield inconsistent findings and make it difficult to compare results across studies.
Another significant limitation stems from the reliance on music genres as the unit for assessing preferences. This is a problem because genres are extremely broad and ill-defined categories, so measurements based solely on genres are necessarily crude and imprecise. Furthermore, not all pieces of music fit neatly into a single genre. Many artists and pieces of music are genre defying or cross multiple genres, so genre categories do not apply equally well to every piece of music. Assessing preferences from genres is also problematic because it assumes that participants are sufficiently familiar with every music genre to provide fully informed reports of their preferences. This is potentially problematic for comparing preferences across different age groups because people from older generations, for instance, may be unfamiliar with the new styles of music enjoyed by young people. Genre-based measures also assume that participants share a similar understanding of the genres. This is an obstacle for research comparing preferences from people in different socioeconomic groups or cultures because certain musical styles may have different social connotations in different regions or countries. Finally, there is evidence that some music genres are associated with clearly defined social stereotypes (Rentfrow, et al., 2009; Rentfrow & Gosling, 2007), which makes it difficult to know whether assessments based on music genres reflect preferences for intrinsic properties of a particular style of music or for the social connotations that are attached to it.
These methodological limitations have thwarted theoretical progress in the social and personality psychology of music. Indeed, much of the research has identified groups of music genres that covary, but we do not know why those genres covary. Why do people who like jazz also like classical music? Why are preferences for rock, heavy metal, and punk music highly related to each other? Is there something about the loudness, structure, or intensity of the music? Do those styles of music share similar social and cultural associations? Moreover, we do not know what it is about people’s preferred music that appeals to them. Are there particular sounds or instruments that guide preferences? Do people prefer music with a particular emotional valence or level of energy? Are people drawn to music that has desirable social overtones? Such questions need to be addressed if we are to develop a complete understanding of the social and psychological factors that shape music preferences. But how should music preferences be conceptualized if we are to address these questions?
Music is multifaceted: it is composed of specific auditory properties, communicates emotions, and has strong social connotations. There is evidence from research concerned with various social, psychological, and physiological aspects of music, not with music preferences per se, suggesting that preferences are tied to various musical facets. For example, there is evidence of individual differences in preferences for vocal as opposed to instrumental music, fast vs. slow music, and loud vs. soft music (Rentfrow & Gosling, 2006; Kopacz, 2005; McCown, Keiser, Mulhearn, & Williamson, 1997; McNamara & Ballard 1999). Such preferences have been shown to relate to personality traits such as Extraversion, Neuroticism, Psychoticism, and sensation seeking. Research on music and emotion has revealed individual differences in preferences for pieces of music that evoke emotions like happiness, joy, sadness, and anger (Chamorro-Premuzic & Furnham, 2007; Rickard, 2004; Schellenberg, Peretz, & Vieillard, 2008; Zentner, Grandjean, & Scherer, 2008). And research on music and identity suggests that some people are drawn to musical styles with particular social connotations, such as toughness, rebellion, distinctiveness, and sophistication (Abrams, 2009; Schwartz & Fouts, 2003; Tekman & Hortaçsu, 2002).
These studies suggest that we should broaden our conceptualization of music preferences to include the intrinsic properties, or attributes, as well as external associations of music. Indeed, if there are individual differences in preferences for instrumental music, melancholic music, or music regarded as sophisticated, such information needs to be taken into account. How should preferences be assessed so that both external and intrinsic musical properties are captured?
There are good reasons to believe that self-reported preferences for music genres reflect, at least partially, preferences for external properties of music. Indeed, research has found that individuals, particularly young people, have strong stereotypes about fans of certain music genres. Specifically, Rentfrow and colleagues (Rentfrow et al., 2009; Rentfrow & Gosling, 2007) found that adolescents and young adults who were asked to evaluate the prototypical fan of a particular music genre displayed significant levels of inter-judge agreement for several genres (e.g., classical, rap, heavy metal, country), suggesting that participants held very similar beliefs about the social and psychological characteristics of such fans. Furthermore, research on the validity of the music stereotypes suggested that fans of certain genres reported possessing many of the stereotyped characteristics. Thus, it would seem that genres alone can activate stereotypes that are associated with a suite of traits, which could, in turn, influence individuals’ stated musical preferences.
There are a variety of ways in which intrinsic musical properties could be measured. One approach would involve manipulating audio clips of musical pieces to emphasize specific attributes or emotional tones. For instance, respondents could report their preferences for clips engineered to be fast, distorted, or loud. McCown et al. (1997) used this approach to investigate preferences for exaggerated bass in music by playing respondents two versions of the same song: one version with amplified bass and one with deliberately flat bass. Though such procedures certainly yield useful information, a song never possesses only one characteristic, but several. As Hevner (1935) pointed out, hearing isolated chords or modified music is not the same as listening to music as it was originally intended, which usually involves an accumulation of musical elements to be expressed and interpreted as a whole. A more ecologically valid way to assess music preferences would be to present audio recordings of real pieces of music.
Indeed, measuring affective reactions to excerpts of real music has a number of advantages. One advantage of using authentic music, as opposed to music manufactured for an experiment, is that it is much more likely to represent the music people encounter in their daily lives. Another important advantage is that each piece of music can be coded on a range of musical qualities. For example, each piece can be coded on music-specific attributes, like tempo, instrumentation, and loudness, as well as on psychological attributes, such as joy, anger, and sadness. Furthermore, using musical excerpts overcomes several of the problems associated with genre-based measures because excerpts are far more specific than genres, and respondents need not have any knowledge of genre categories in order to indicate their degree of liking for a musical excerpt. Thus, it seems that preferences for musical excerpts would provide a rich and ecologically valid representation of music preferences that capture both external and intrinsic musical properties.
The goal of the present research is to broaden our understanding of the factors that shape the music preferences of ordinary music listeners, as opposed to trained musicians. Past work on individual differences in music preferences focused on genres, but genres are limited in several ways that ultimately hinder theoretical progress in this area. This research was intended to rectify those problems by developing a more nuanced assessment of music preferences. Previous work suggests that audio excerpts of authentic music would aid the development of such an assessment. Thus, the objective of the present research was to investigate the structure of affective reactions to audio excerpts of music, with the aim of identifying a robust factor structure.
Using multiple pieces of music, methods, samples, and recruitment strategies, four studies were conducted to achieve that objective. In Study 1, we assessed preferences for audio excerpts of commercially released, but not well-known music in a sample of Internet users. To assess the stability of the results, a follow-up study was conducted using a subsample of participants. Study 2 also used Internet methods, but unlike Study 1, preferences were assessed for pieces of music that had never been released to the public, and to which we purchased the copyright. In Study 3 we examined music preferences among a sample of university students using a subset of the pieces of music from Study 2. In Study 4 the pieces of music from the previous studies were coded on several musical attributes and analyzed in order to examine the intrinsic properties and external associations that influence the structure of music preferences.
The objective of Study 1 was to determine whether there is an interpretable structure underlying preferences for excerpts of recorded music. As noted previously, although past research on music-genre preferences has reported slightly different factor structures, there is some evidence for four to five music-preference factors. Therefore, in the present study, we expected to identify at least four factors. Although we had some ideas about how many factors to expect, we used exploratory factor-analytic techniques to examine the hierarchical structure of music preferences without any a priori bias or constraints.
We wanted to assess preferences among a representative sample of music listeners as opposed to a sample of university students, which is the population typically studied in music preference research. So we recruited participants over the Internet to participate in a study concerned with psychology and music. Additionally, to determine the stability of the results, we used a subsample of participants to examine generalizability of the music factors across methods and over time.
In the Spring of 2007, advertisements were placed in several locations on the Internet (e.g., Craigslist.com) inviting people to participate in an Internet-based study of personality, attitudes, and preferences. In recruitment, we sought to obtain a wider, more heterogeneous cross-section of respondents than is typically found in such studies, which tend to employ university undergraduates. Approximately 1,600 individuals responded to the advertisement and provided their email addresses. They were then contacted and told that participation entailed completing several surveys on separate occasions, one of which included our music preference measure. Those who agreed to participate were directed to a Webpage where they could begin the first survey. After completing each survey, they were informed that they would receive an e-mail message within a few days with a hyperlink that would direct them to the next survey. Participants who completed all surveys received a $25 gift certificate to Amazon.com.
A total of 706 participants completed the music preference measure. Of those who indicated, 452 (68%) were female and 216 (32%) were male. The median age of participants was 31. Of those who reported their level of education, 27 (4%) had not completed high school, 406 (62%) completed high school or vocational school, 177 (27%) had a college degree and/or some post-college education, and 48 (7%) had a post-college degree. This sample met our goals of obtaining a broad representation of age groups and educational background.
Our objective was to assess individual differences in preferences for the many different styles of music that people are likely to encounter in their everyday lives. Because the musical space is vast, it was crucial that we cast as wide a net as possible when selecting pieces. We therefore developed a systematic, multi-step procedure for choosing musical pieces to ensure broad coverage of that space.
Our first step was to identify broad musical styles that appeal to most people. To that end, a sample of 5,000 participants who responded to an Internet advertisement, plus a sample of 600 university students, filled out an open-ended questionnaire to name their favorite music genres (e.g. “rock”) and subgenres (e.g. “classic rock,” “alternative rock”) and examples of music for each one. From this, we identified 23 genres and subgenres that occurred on lists most often. In some cases, experimenter judgment was required (e.g., AC/DC was termed “heavy metal” by some and “classic rock” by others) in order to create coherent categories. To this list of 23, we added three sub-genres that were mentioned only a small number of times in our pilot study, because our aim was to cover as wide a range of musical styles as possible and we were concerned that these may have been omitted due to a pre-selection effect (Internet users and college students are not necessarily representative of all music listeners). Therefore, for the sake of completeness, we added polka, marching band, and avant-garde classical. Examples of those sub-genres that appeared on a moderate number of lists and that we did not include are Swedish death metal, West Coast rap, Bebop, Psychedelic rock, and Baroque. We folded these into the categories of heavy metal, rap, jazz, classic rock, and classical, respectively.
The next step involved obtaining musical exemplars for the 26 music subgenres. There is evidence that well-known pieces of music can serve as powerful cues to autobiographical memories (Janata, Tomic & Rakowski, 2007) and that familiar music tends to be liked more than unfamiliar music (Dunn, in press; North & Hargreaves, 1995). Because we were interested in affective reactions to the musical stimuli themselves, we needed to reduce the possibility that preference ratings would be contaminated by idiosyncratic personal histories. We therefore required that the exemplars be unknown pieces of music.
Our aim in selecting exemplars was not necessarily to find pieces of music from obscure artists, but pieces that were of a similar quality to hits and yet were unknown. To accomplish this, we consulted ten professionals – musicologists and recording industry veterans – to identify representative or prototypical pieces for each of the 26 sub-genres. We instructed them to choose major-record-label music that had been commercially released, but that achieved only low sales figures, so it was unlikely to have been heard previously by our participants. This created a set of pieces that had been through all of the many steps prior to commercialization that more popular music had gone through – being discovered by a talent scout, being signed to a label, selecting the best piece with an artists and repertoire executive, and recording in a professional studio with a professional production team. Most of these selections were clearly not well known (e.g., Boney James, Meav, and Cat's Choir), and a few pieces were recorded by better-known artists (Kenny Rankin, Karla Bonoff, Dean Martin), but the pieces themselves were not hits, nor were they taken from albums that had been hits. This procedure generated several exemplars for each subgenre.
Next we reduced the lists of exemplars for each subgenre by collecting validation data from a pilot sample. Specifically, excerpts of the musical pieces were presented in random order to 500 listeners, recruited over the Internet, who were asked to (a) name the genre or sub-genre that they felt best represented the musical piece, and (b) to indicate, on a scale of 1 – 9, how well they thought each piece represented the genre or sub-genre they had chosen. Using the results from this pilot test we chose the two musical pieces that were rated as most prototypical of each music category, which resulted in 52 excerpts altogether (2 for each of the 26 subgenres).
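The winnowing step described above — keeping the two most prototypical pieces per category — amounts to a simple aggregation. The following is a minimal sketch with hypothetical data and column names (`genre`, `piece`, `rating`), not the authors' actual pipeline:

```python
import pandas as pd

# Hypothetical pilot data: one row per (listener, piece) judgment,
# with 'rating' the 1-9 prototypicality score for the chosen category.
pilot = pd.DataFrame({
    'genre':  ['jazz', 'jazz', 'jazz', 'jazz', 'polka', 'polka', 'polka'],
    'piece':  ['a',    'a',    'b',    'e',    'c',     'd',     'd'],
    'rating': [8,      9,      6,      3,      7,       5,       4],
})

# Mean prototypicality per piece, then keep the top two pieces per genre.
means = (pilot.groupby(['genre', 'piece'], as_index=False)['rating'].mean()
              .sort_values(['genre', 'rating'], ascending=[True, False]))
top_two = means.groupby('genre').head(2)
```

Applied to all 26 categories, this kind of selection yields the 52 excerpts (2 per subgenre) used in the main study.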
Thus, we measured music preferences by asking participants to indicate their degree of liking for each of the 52 musical excerpts using a nine-point rating scale, with endpoints at 1 (Not at all) and 9 (Very much). The stimuli were 15-second excerpts from 52 different pieces of music, digitized and played over a computer as MP3 files. The complete list of pieces presented appears in Table 1.
Multiple criteria were used to decide how many factors to retain: parallel analyses of Monte Carlo simulations, replicability across factor-extraction methods, and factor interpretability. Principal-components analysis (PCA) with varimax rotation yielded a substantial first factor that accounted for 27% of the variance, reflecting individual differences in general preferences for music. Parallel analysis of random data suggested that the first five eigenvalues were greater than chance. Examination of the scree plot suggested an “elbow” at roughly six factors. Successive PCAs with varimax rotation were then performed for one-factor through six-factor solutions. In the six-factor solution, the sixth factor was comparatively small with low-saturation items. Altogether these analyses suggested that we retain no more than five broad music-preference factors.
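The parallel-analysis criterion named above compares the eigenvalues of the observed correlation matrix against eigenvalues obtained from random data of the same size. A generic numpy sketch of that idea (not the authors' exact Monte Carlo setup) looks like this:

```python
import numpy as np

def parallel_analysis(data, n_iter=100, seed=0):
    """Horn's parallel analysis: retain factors whose observed eigenvalues
    exceed the mean eigenvalues of same-sized random-normal data.
    (A common variant uses the 95th percentile instead of the mean.)"""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    # Eigenvalues of the observed correlation matrix, in descending order.
    real = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
    rand = np.zeros(p)
    for _ in range(n_iter):
        noise = rng.standard_normal((n, p))
        rand += np.linalg.eigvalsh(np.corrcoef(noise, rowvar=False))[::-1]
    rand /= n_iter
    return int(np.sum(real > rand))  # number of above-chance eigenvalues
```

In Study 1's terms, applying a criterion like this to the 706 × 52 matrix of excerpt ratings is what indicated that only the first five eigenvalues exceeded chance.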
To determine whether the factors were invariant across methods, we examined the convergence between orthogonally rotated factor scores from PCA, principal-axis (PA), and maximum-likelihood (ML) extraction procedures. Specifically, PCAs, PAs, and MLs were performed for one- through five-factor solutions; the factor scores for each solution were then intercorrelated. The results revealed very high convergence across the three extraction methods, with correlations averaging above .99 between the PCA and PA factors, .99 between the PCA and ML factors, and above .99 between the PA and ML factors. These results indicate that the same solutions would be obtained regardless of the particular factor-extraction method used. As PCAs yield exact and perfectly orthogonal factor scores, solutions derived from PCAs are reported in this article.
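The convergence check amounts to extracting factor scores with two different methods and intercorrelating them. A simplified sketch using scikit-learn (unrotated solutions, PCA versus ML factor analysis, rather than the full PCA/PA/ML comparison reported above):

```python
import numpy as np
from sklearn.decomposition import PCA, FactorAnalysis

def method_convergence(X, k):
    """Correlate factor scores from PCA with scores from maximum-
    likelihood factor analysis. Returns the best-matching |r| for each
    PCA component, since factor order and sign can differ across methods."""
    Xz = (X - X.mean(axis=0)) / X.std(axis=0)
    pca_scores = PCA(n_components=k).fit_transform(Xz)
    ml_scores = FactorAnalysis(n_components=k).fit_transform(Xz)
    # Cross-correlation block: rows = PCA components, cols = ML factors.
    r = np.corrcoef(pca_scores.T, ml_scores.T)[:k, k:]
    return np.abs(r).max(axis=1)
```

Values near 1 for every factor, as reported above, mean the two extraction methods recover essentially the same solution.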
We next examined the hierarchical structure of the one- through five-factor solutions using the procedure proposed by Goldberg (2006). First, a single factor was specified in a PCA and then in four subsequent PCAs we specified two, three, four, and five orthogonally rotated factors. The factor scores were saved for each solution. Next, correlations between factor scores at adjacent levels were computed. The resulting hierarchical structure is displayed in Figure 1.
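The hierarchical procedure can be sketched as follows. Note that the original analysis used PCA with varimax rotation; this illustration substitutes scikit-learn's ML `FactorAnalysis` with its built-in varimax rotation (scikit-learn ≥ 0.24), so it is a stand-in, not a reproduction:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

def bass_ackwards(X, max_k):
    """Sketch of Goldberg's (2006) top-down procedure: extract rotated
    solutions with 1..max_k factors, save the factor scores, and correlate
    scores at adjacent levels to see how factors split as more are added."""
    Xz = (X - X.mean(axis=0)) / X.std(axis=0)
    scores = {}
    for k in range(1, max_k + 1):
        rot = 'varimax' if k > 1 else None  # rotation is moot for one factor
        scores[k] = FactorAnalysis(n_components=k, rotation=rot).fit_transform(Xz)
    # Cross-level correlation matrices: rows = level-k factors,
    # columns = level-(k+1) factors.
    return {(k, k + 1): np.corrcoef(scores[k].T, scores[k + 1].T)[:k, k:]
            for k in range(1, max_k)}
```

Large correlations linking a level-k factor to two level-(k+1) factors are what reveal a "split" of the kind shown in Figure 1.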
There are several noteworthy findings that can be seen in this figure. The factors in the two-factor solution resemble the well-documented “Highbrow” (or Sophisticated) and “Lowbrow” music-preference dimensions; the excerpts with high loadings on the “Sophisticated/aesthetic” factor were drawn mainly from classical, jazz, and world music. This factor remained virtually unchanged through the three-, four-, and five-factor solutions. The excerpts with high loadings on the Lowbrow factor were predominantly country, heavy metal, and rap. In the three-factor solution, this factor then split into subfactors that appear to differentiate music based on its forcefulness or intensity. The “Intense/aggressive” factor comprised heavy metal, punk, and rock excerpts, and remained fully intact through the four- and five-factor solutions. The less intense factor comprised excerpts from the country, rock-n-roll (early rock, rockabilly), and pop genres, and these first two music types remained consistent through the four- and five-factor solutions, at which point we labeled the factor “Campestral/sincere.” In the four-factor solution, a “Mellow/relaxing” factor emerged that comprised predominantly pop, soft-rock, and soul/R&B excerpts. That factor remained in the five-factor solution, where an “Urban/danceable” factor emerged which included mainly rap and electronica music.
Although the factors depicted in Figure 1 are clear and interpretable, some of them (e.g., Urban, Mellow) might be driven by demographic differences in gender and/or age. This is a particularly important issue for music-preference research because some music might appeal more or less to men than to women (e.g., punk and soul, respectively), or more or less to younger people than to older people (e.g., electronica and classic rock, respectively).
To test whether the music preference structure was influenced by the demographics of the participants, we compared the factor structure based on the original preference ratings with the structure derived from residualized musical ratings, from which sex and age were statistically removed. Specifically, we conducted a PCA with varimax rotation on the residualized musical ratings and specified a five-factor solution. The factor structure derived from the residualized ratings was virtually identical to the one derived from the original musical ratings, with factor congruence coefficients ranging from .99 (Urban) to over .999 (Sophisticated). Furthermore, analyses of the correlations between the corresponding factor scores derived from the original and the residualized ratings revealed high convergence for all of the factors, with convergent correlations ranging from .96 (Urban) to .99 (Mellow). These results indicate that even though there are significant sex and age differences in preferences for specific pieces of music, the factors underlying music preferences are invariant to gender and age effects. Table 1 provides the factor loadings for the five music-preference factors.
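Factor congruence coefficients of the kind reported above are conventionally computed as Tucker's coefficient of congruence between two columns of factor loadings. A minimal sketch, with made-up loading vectors standing in for the original and residualized solutions:

```python
import numpy as np

def tucker_congruence(a, b):
    """Tucker's coefficient of factor congruence between two loading
    vectors: phi = sum(a*b) / sqrt(sum(a^2) * sum(b^2)). Values near 1
    indicate that the two factors have essentially the same loadings."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(a @ b / np.sqrt((a @ a) * (b @ b)))

# identical loading patterns are perfectly congruent
orig_loadings = [0.8, 0.7, 0.1, -0.2]   # hypothetical loadings
resid_loadings = [0.8, 0.7, 0.1, -0.2]
print(round(tucker_congruence(orig_loadings, resid_loadings), 3))  # 1.0
```

Unlike a Pearson correlation, the coefficient is computed on raw loadings without centering, so it is sensitive to the loadings' signs and overall pattern, not just their rank order.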
Close inspection of the excerpts that loaded strongly on each factor indicated that most of those on the Sophisticated factor were recordings of instrumental jazz, classical, and world music, whereas the majority of the excerpts on the other factors included vocals. This confound obscured the meaning of the factors, particularly the Sophisticated factor, because it was not clear whether the factors reflected preferences for general musical characteristics common to the factors, or merely preferences for instrumental versus vocal music.
We addressed this issue by revising our music-preference measure to include a balance of instrumental and vocal excerpts for each of the music genres and subgenres. The same 52 excerpts from the original measure were kept, but for 12 of the pieces with vocals we created two excerpts: one from a section of the piece with vocals and a second from a purely instrumental section of the same piece. The revised measure thus comprised 64 musical excerpts, each approximately 15 seconds in length. A total of 75 participants from the original sample volunteered to complete the revised music-preference survey without compensation.
If the five music-preference factors were not an artifact of confounding instrumental and vocal music excerpts, we should expect the same five dimensions to emerge from the revised music-preference measure. And, indeed, the same five factors were recovered in a PCA with varimax rotation, with a structure that was nearly identical to the one derived from the original musical excerpts. Analyses of the correlations between the factor scores derived from the original and the revised excerpts revealed high convergence for all of the factors, with convergent correlations ranging from .61 (Urban) to .82 (Sophisticated). These results indicate that the original factor structure was not an artifact due to the confounding of the unequal numbers of musical excerpts with vocals for each factor. Furthermore, because the follow-up took place 5 months after the original study ended, these results also suggest that the music-preference factors are stable over time.
The findings from Study 1 and its follow-up provide substantial evidence for five music-preference factors. These five factors capture a broad range of musical styles and can be labeled MUSIC, for the Mellow, Urban, Sophisticated, Intense, and Campestral music-preference factors. Three of these factors (Sophisticated, Campestral, and Intense) are similar to factors reported previously (e.g., Delsing et al., 2008; Rentfrow & Gosling, 2003). On the other hand, previous studies have suggested that preferences for rap, soul, electronica, dance, and R&B music comprise one broad factor, whereas in the current study rap, electronica, and dance music form one factor (Urban) while soul and R&B music comprise another (Mellow). One likely explanation for this difference is that the present research examined a broader array of music genres and subgenres than did most previous research. Moreover, the results from the follow-up study five months later suggest that our music-preference dimensions are reasonably stable over time.
Taken together, the findings from this study are encouraging. However, a potential problem with the current work is that several of the music excerpts used in the music-preference measure were from pieces recorded by famous music artists (e.g., Ludacris, Dean Martin, Oscar Peterson, Ace of Base, Social Distortion). This is potentially problematic because it is likely that some of the excerpts were more familiar to some participants than to others, and several studies (e.g., Brickman & D’Amato, 1975; Dunn, in press) indicate that familiarity with a piece of music is positively related to liking it. Even if the particular pieces were unfamiliar, listeners may have associations or memories for these particular artists independent of the excerpts themselves. Therefore, it is necessary to confirm the music-preference structure using both artists and music that are unfamiliar to listeners.
The results from Study 1 reveal an interpretable set of music-preference factors that resemble the factors reported in previous research. This is encouraging because it further supports the hypothesis that there is a robust and stable structure underlying music preferences. However, it is conceivable that the factors obtained in Study 1, although consistent with previous research, could be a result of the specific pieces of music administered. In theory, if the five music-preference factors are robust, we should expect to obtain a similar set of factors from an entirely different selection of musical pieces. This is a very conservative test, but a necessary one for evaluating the robustness of the MUSIC model.
Therefore, the aim of Study 2 was to investigate the generalizability of the music-preference factor structure across samples as well as musical stimuli. Specifically, an entirely new music-preference stimulus set was created that included only previously unreleased music from unknown, aspiring artists. Because none of the excerpts included in Study 1 were included in Study 2, evidence for the same five music-preference factors would ensure that the structure is not merely an artifact of the particular pieces or artists used in Study 1, thereby providing strong support for the MUSIC model.
In the Spring of 2008, advertisements were placed in several locations on the Internet (e.g., Craigslist.com) inviting people to participate in an Internet-based study. All those who volunteered and provided consent were directed to a website where they could complete a measure of music preferences. A total of 354 people chose to participate in the study. Of those who indicated, 235 (66%) were female and 119 (34%) were male; 11 (3%) were African American, 52 (15%) were Asian, 266 (75%) were Caucasian, 15 (4%) were Hispanic, and 10 (3%) were of other ethnicities. The median age of the participants was 25. After completing the survey, participants received a $5 gift certificate to Amazon.com.
The primary aim of Study 2 was to replicate the MUSIC model using a new set of unfamiliar musical pieces. To obtain unfamiliar pieces of music, we purchased from Getty Images the copyright to several pieces of music that had never been released to the public. Getty Images is a commercial service that provides photographs, films, and music for the advertising and media industries. All materials are of professional-grade in terms of the quality of recording, production, and composition (indeed, they pass through many of the same filters and levels of evaluation that commercially released recordings do).
In the autumn of 2007, five expert judges searched the Getty database (http://www.Getty.com) for pieces of music to represent the same 26 genres and subgenres used in Study 1. The judges worked independently to identify exemplary pieces of music and then pooled their results to reach a consensus on those pieces that were the best prototypes for each category. We sought to obtain four pieces for each category, but for a few (such as World Beat and Celtic) the judges were able to agree on only two or three pieces as good fits to the category; hence the resulting set comprised a total of 94 excerpts. A complete list of the pieces used is shown in Table 2.
As in Study 1, preferences were assessed by asking participants to indicate the degree of their liking for each of 94 musical excerpts using a nine-point rating scale with endpoints at 1 (Not at all) and 9 (Very much).
As in Study 1, multiple criteria were used to decide how many factors to retain. A PCA with varimax rotation yielded a large first factor that accounted for 26% of the variance; parallel analysis of random data suggested that the first seven eigenvalues were greater than chance; and the scree plot suggested an “elbow” at roughly six factors. PCAs with varimax rotation were then performed for one-factor through six-factor solutions. One of the factors in the six-factor solution was comparatively small and included several excerpts with large secondary loadings. Based on those findings, we elected to retain the first five music-preference factors.
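Parallel analysis, one of the retention criteria used here, compares the observed eigenvalues with those expected from random data of the same dimensions (Horn's procedure). A minimal sketch on simulated placeholder data with two built-in factors; the ratings are illustrative, not the study's:

```python
import numpy as np

def parallel_analysis(ratings, n_sims=100, seed=0):
    """Retain components whose observed eigenvalues exceed the mean
    eigenvalues obtained from random normal data of the same shape."""
    rng = np.random.default_rng(seed)
    n, p = ratings.shape
    observed = np.linalg.eigvalsh(np.corrcoef(ratings, rowvar=False))[::-1]
    random_eigs = np.zeros(p)
    for _ in range(n_sims):
        sim = rng.normal(size=(n, p))
        random_eigs += np.linalg.eigvalsh(np.corrcoef(sim, rowvar=False))[::-1]
    random_eigs /= n_sims
    n_retain = int(np.sum(observed > random_eigs))
    return n_retain, observed, random_eigs

# toy data: ten variables driven by two common factors plus noise
rng = np.random.default_rng(1)
factors = rng.normal(size=(300, 2))
loadings = rng.normal(size=(2, 10))
ratings = factors @ loadings + 0.5 * rng.normal(size=(300, 10))
k, obs, rand_eigs = parallel_analysis(ratings)
print(k)
```

A stricter variant compares against the 95th percentile of the random eigenvalues rather than the mean; either way, the logic is that a component is worth retaining only if it explains more variance than chance correlations among uncorrelated variables would.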
Examination of factor invariance across extraction methods again revealed very high convergence across the PCA, PA, and ML extraction methods, with correlations averaging above .999 between the PCA and PA factors, .99 between the PCA and ML factors, and over .999 between the PA and ML factors. Given that the factors were equivalent across extraction methods and that we presented the loadings from the PCAs in Study 1, we again report solutions derived from PCAs in Study 2.
The final five-factor solutions were virtually identical between Study 1 and Study 2, although inspection of the one- through five-factor solutions revealed a slightly different order of emergence in the two studies. As can be seen in Figure 2, the first factor in the two-factor solution was difficult to interpret because it comprised a wide array of musical styles, from classical and soul, to electronica and country. In contrast, the second factor clearly resembled the Intense factor found in Study 1, and remained virtually unchanged through the three-, four-, and five-factor solutions. In the three-factor solution, a factor resembling the Sophisticated dimension emerged, comprising classical, jazz, and world music excerpts. This factor remained in the four- and five-factor solutions. A factor resembling Campestral also emerged in the three-factor solution, and was composed mainly of country and rock-n-roll musical excerpts. The Campestral factor emerged fully in the four- and five-factor solutions. In the four-factor solution, a factor composed primarily of rap, electronic, and soul/R&B music excerpts emerged. This factor split in the five-factor solution into factors closely resembling the Urban and Mellow dimensions. The Urban factor included mainly rap and electronica music, and the Mellow factor included predominantly pop, soft-rock, and soul/R&B excerpts. The music excerpts and their loadings on each of the five factors are presented in Table 2.
The five music-preference factors that emerged in Study 2 replicate the factors identified in Study 1. This is a particularly impressive finding considering that entirely different excerpts from different pieces and different artists were included in the two studies. However, Studies 1 and 2 share three characteristics that could limit the generalizability of the results. First, both studies were conducted over the Internet. Although there is evidence that the results obtained from Internet-based surveys are similar to those based on paper-and-pencil surveys (Gosling, Vazire, Srivastava, & John, 2004), the stimuli used in the present research were musical excerpts, not text-based items. The contexts in which participants completed the survey were most certainly different, and it is possible that the testing conditions could have affected participants’ ratings. Second, both studies relied on samples of self-selected participants. It is reasonable to suppose that people who responded to the online advertisements about a study on the psychology of music might be more interested in music and/or share other kinds of preferences compared to people who chose not to participate or who did not visit the websites where the advertisements were posted. And third, the music preference question used in both studies was potentially ambiguous. For each music excerpt, participants were asked, “How much do you like this music?” The question was intended to assess participants’ degree of liking for the style of music that the excerpts represented, but it is possible that some participants reported their degree of liking for the excerpt itself. Given these limitations, it is important to know whether the results from Studies 1 and 2 would generalize across other samples and methods.
Study 3 was designed to investigate the generalizability of the music-preference factors across samples and methods. A subset of the music excerpts used in Study 2 was administered to a sample of university students in person. Participants listened to the excerpts in a classroom setting. For each excerpt, half of the participants rated how much they liked that excerpt, and the other half rated how much they liked the genre that the excerpt represented.
In the Fall of 2008, students registered for introductory psychology at the University of Texas at Austin were invited to participate in an in-class survey of music preferences. A total of 817 students chose to participate in the study. Of those who indicated, 488 (62%) were female and 306 (38%) were male; 40 (5%) were African American, 144 (18%) were Asian, 397 (51%) were Caucasian, 171 (22%) were Hispanic, and 28 (4%) were of other ethnicities. The median age of participants was 18.
As part of the curriculum for two introductory psychology courses, which were taught by the same pair of instructors, surveys, questionnaires, and exercises that pertained to the lecture topics were periodically administered to students. A survey about music preferences was administered as part of the lecture unit on personality and individual differences. Students were invited to participate in a study of music preferences, which involved listening to 25 music excerpts and reporting their degree of liking for each one (a complete list of the pieces is shown in Table 3). For each music excerpt, participants in one class were asked to rate how much they liked the excerpt, whereas participants in the other class were asked to rate how much they liked the genre of the music. Each musical excerpt was played in its entirety, and only once.
Due to time constraints and concerns about participant fatigue, a shortened music-preference measure was used in Study 3. Specifically, a subset of 25 of the musical excerpts used in Study 2 was used as stimuli. Rather than selecting only the excerpts with the highest factor loadings in Study 2, we chose excerpts that captured the breadth of the factors. Preferences were measured by asking participants to indicate the degree of their liking for each of the 25 musical excerpts using a five-point rating scale, with endpoints at 1 (Extremely dislike) and 5 (Extremely like). The set of excerpts used can be found in Table 3.
We first examined the equivalence of the music-preference factor structures across test formats (i.e., ratings of excerpt preferences compared to ratings of genre preferences). PCAs with varimax rotation yielded first factors that accounted for 17% and 18% of the variance (excerpt preferences and genre preferences, respectively). For both groups, parallel analyses of random data suggested that the first five eigenvalues were greater than chance, and the scree plots suggested “elbows” at roughly six factors. PCAs with varimax rotation were performed for one-factor through six-factor solutions for both groups. Examination of factor congruence between the two groups revealed high congruence for the five-factor solution (mean factor congruence = .97), suggesting that the factor structures were equivalent across the two test formats. Based on those findings, we combined the ratings for both groups.
We next conducted a PCA with varimax rotation using the full sample and specified a five-factor solution. As can be seen in Figure 3 and Table 3, the excerpts loading on each of the factors clearly resemble those observed in the previous studies. The first factor included primarily classical, jazz, and world music excerpts and clearly resembled the Sophisticated preference dimension. The second factor replicates the Intense factor, as it is composed entirely of heavy metal, rock, and punk music. The third factor reflects the Urban music-preference factor and includes mainly rap and electronica music excerpts. The fourth factor is composed predominantly of soft rock and adult contemporary excerpts and resembles the Mellow dimension. The fifth factor comprises country and rock-n-roll excerpts, thus clearly corresponding to the Campestral factor.
Taken together, the results from all three studies provide compelling evidence that the five MUSIC factors are quite robust: The same factors emerged in three independent studies that used different sampling strategies, methods, musical content, participants, and test formats. Based on these findings, it seems reasonable to conclude that the MUSIC dimensions reflect individual differences in preferences for broad styles of music that share common properties. But what are those properties? What do the styles of music that comprise each music-preference dimension have in common?
The factor loadings reported in Tables 1, 2, and 3 might suggest that the factors can be characterized in terms of musical genres. For example, most of the excerpts with high loadings on the Sophisticated dimension fall within the classical, jazz, or world music genres, and most of the excerpts on the Intense dimension fall in the rock, heavy metal, or punk genres. However, some genres load on more than one music-preference dimension. For instance, jazz is represented on the Sophisticated and the Urban factors, and electronica is represented on the Sophisticated, Urban, and Mellow factors. Thus, the preference factors seem to capture something more than just preferences for genres.
Music varies on a range of features, from tempo, instrumentation, and density, to psychological characteristics like sadness, enthusiasm, and aggression. Although genres are defined in part by an emphasis on certain musical attributes, it is conceivable that individuals have preferences for particular music attributes. For example, some people might prefer sad music to joyful music, regardless of genre, just as other people might prefer instrumental music to vocal music. So it would seem reasonable to ask whether our five MUSIC factors reflect preferences for attributes in addition to genres. If we are to develop a complete understanding of music preferences, it is necessary that we go beyond genre and examine more specific features of music.
The objective of Study 4 was to examine those variables that contribute to the structure of musical preferences. Are the factors best understood as simply composites of music from similar genres? Or are the factors the result of preferences for particular musical attributes? To investigate those questions, we analyzed the independent and combined effects of genre preferences and music-related attributes on the MUSIC model.
Differentiating the effects of genre preferences and attributes required that we code the various music pieces investigated in the previous studies for their attributes. We wanted to cover many aspects of music, so we developed a multi-step procedure to create lists of descriptors to describe qualities specific to music (e.g., loud, fast) as well as psychological characteristics of music (e.g., sad, inspiring).
Creating a list of attributes involved two steps. First, we generated sets of music-specific and psychological attributes on which pieces could be judged. The selection procedure started with the set of 25 music-descriptive adjectives reported by Rentfrow and Gosling (2003). Those attributes were derived from a multi-step procedure in which participants independently generated lists of terms that could be used to describe music (for details, see Rentfrow and Gosling, 2003). Some of the attributes in that set were highly related (e.g., depressing/sad, cheerful/happy) or displayed low reliabilities (e.g., rhythmic, clever), so we eliminated redundant attributes (with |r| > .70) and unreliable attributes (with coefficient alphas < .70).
To increase the range of music attributes, two expert judges supplemented the initial list with a new set of music-descriptive adjectives. Next, two different judges independently evaluated the extent to which each music descriptor could be used to characterize various aspects of music. Specifically, the judges were instructed to eliminate from the list attributes that could not easily be used to describe a piece of music and then to rank order the remaining music attributes in terms of importance. This strategy resulted in seven music-specific attributes: dense, distorted, electric, fast, instrumental, loud, and percussive; and seven psychologically oriented attributes: aggressive, complex, inspiring, intelligent, relaxing, romantic, and sad.
Forty judges, with no formal music training, independently rated the 146 musical excerpts used in Studies 1 and 2 (i.e., 52 excerpts used in Study 1 and the 94 excerpts in Study 2) on each of the 14 attributes. Specifically, 18 judges coded the excerpts used in Study 1 and 30 judges coded those from Study 2. To reduce the impact of fatigue and order effects, the judges coded subsets of the excerpts; no judge rated all of them (the number of judges per song ranged from 6 to 18; mean number of judges per song was 10). Judges were unaware of the purpose of the study and were simply instructed to listen to each excerpt in its entirety, then to rate it on each of the music attributes, using a 9-point scale with endpoints at 1 (Extremely uncharacteristic) and 9 (Extremely characteristic). Our analyses in Studies 1–3 were based on the music preferences of ordinary music listeners, so for this study we were interested in ordinary listeners’ impressions of music (rather than the impressions of trained musicians). Thus, judges were given no specific instructions about what information they should use to make their judgments.
We computed coefficient alphas to assess the reliability of the judges’ attribute ratings. Analyses across all the excerpts revealed high attribute agreement for the music-specific attributes (mean alpha = .93), with the highest agreement for Instrumental (mean alpha = .99) and the lowest agreement for Distorted (mean alpha = .81). Attribute agreement was also high for the psychologically oriented attributes (mean alpha = .83), with the highest agreement for Aggressive (mean alpha = .93) and the lowest agreement for Inspiring (mean alpha = .68). These results suggest that judges perceived similar qualities in the music and generally agreed about the rank ordering of the excerpts on each of the attributes.
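Coefficient alpha for judge agreement can be computed directly from an excerpts-by-judges rating matrix, treating each judge as an “item” and each excerpt as a case. A minimal sketch with made-up ratings:

```python
import numpy as np

def cronbach_alpha(ratings):
    """Coefficient alpha: alpha = k/(k-1) * (1 - sum of item variances /
    variance of the total score), where k is the number of judges and
    rows are excerpts. High values mean judges rank excerpts alike."""
    ratings = np.asarray(ratings, float)   # shape: (n_excerpts, n_judges)
    k = ratings.shape[1]
    item_vars = ratings.var(axis=0, ddof=1).sum()
    total_var = ratings.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# three hypothetical judges who rank five excerpts identically
base = np.array([1.0, 3.0, 5.0, 7.0, 9.0])
judges = np.column_stack([base, base + 1, base - 1])  # same ranking, shifted
print(round(cronbach_alpha(judges), 3))  # 1.0
```

Note that alpha is insensitive to constant offsets between judges (a lenient and a harsh judge who order the excerpts identically still agree perfectly), which matches its use here as an index of agreement about rank ordering.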
To learn more about the nature of the music-preference factors, we examined the musical attributes and genres of the excerpts studied in Studies 1 and 2. Specifically, using musical excerpts as the unit of analysis, we correlated the factor loadings of each excerpt on each MUSIC factor with the mean music-specific attributes, emotion-oriented attributes, and genres of the excerpts. These analyses shed light on the broad and specific qualities that compose each of the MUSIC factors.
As can be seen in Table 4, the MUSIC factors were related to several of the attributes and genres. The first column shows the results for the Mellow factor. Musically, the excerpts with high loadings on the Mellow factor were perceived as slow, quiet, and not distorted. Emotionally, the excerpts were perceived as romantic, relaxing, not aggressive, sad, somewhat simple, but intelligent. Mellow was also associated with the soft rock, R&B, quiet storm, and adult contemporary music genres. As can be seen in the second column, the excerpts on the Urban factor were perceived as percussive, electric, and not sad. Moreover, Urban was primarily related to rap, electronica, Latin, acid jazz, and Euro pop styles of music. The results in the third column reveal several associations between the Sophisticated factor and its attributes. Musically, the Sophisticated excerpts were perceived as instrumental, and not electric, percussive, distorted, or loud; in terms of emotions, they were perceived as intelligent, inspiring, complex, relaxing, romantic, and not aggressive. The genres with the strongest relations with Sophisticated were classical, marching band, avant-garde classical, polka, world beat, traditional jazz, and Celtic. As shown in the fourth column, Intense music was perceived as distorted, loud, electric, percussive, and dense, and also as aggressive, and not relaxing, romantic, intelligent, or inspiring. The classic rock, punk, heavy metal, and power pop genres had the strongest relations with Intense. Finally, as can be seen in the fifth column, Campestral music was perceived as not distorted, instrumental, loud, electric, or fast. In terms of the emotional attributes, the Campestral excerpts were perceived as somewhat romantic, relaxing, and sad, and not aggressive, complicated, or especially intelligent. The musical styles most strongly associated with the Campestral factor were, of course, subgenres of country music.
These results show clearly that the MUSIC factors have unique musical and emotional features and comprise different sets of genres. What accounts for the placement of a piece of music in the MUSIC space? Is it the genres or the attributes?
To determine the extent to which a musical piece’s location within the multi-dimensional MUSIC space was driven by the genre or attributes of the piece, a series of hierarchical regressions were performed on the excerpts. First, five hierarchical regressions were conducted in which the factor loadings of the music excerpts were regressed onto the mean judge attribute ratings at step 1 and the music genres at step 2. These analyses shed light on how much variance in the MUSIC factors is accounted for by music attributes and whether genres add incremental validity.
As can be seen in the top of Table 5, the attributes accounted for significant proportions of variance for each of the MUSIC dimensions, with multiple correlations ranging from .67 for Mellow to .83 for Intense. When the genres were added to the regression models, the amount of explained variance increased significantly for all five music-preference factors. Specifically, adding music genres to the regressions increased the multiple correlations to .96, .93, .93, .90, and .86, for the Intense, Campestral, Sophisticated, Urban, and Mellow factors, respectively. These findings raise the question of whether genres account for more unique variance than do music attributes.
To address that question, another set of five hierarchical regression analyses was performed in which the factor loadings of the music excerpts were regressed onto the music genres at step 1 and then the attributes at step 2. As can be seen in the bottom rows of Table 5, genres also accounted for significant proportions of variance, with multiple correlations ranging from .76 for Mellow to .94 for Intense. However, attributes also appear to account for significant proportions of unique variance, with significant increases in multiple correlations for Mellow, Urban, Intense, and Sophisticated (ΔFs = 4.64, 4.04, 3.41, and 2.58, respectively; all ps < .05), and a marginally significant increase for Campestral (ΔF = 1.65, p < .10).
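The ΔF statistic for these hierarchical regressions tests whether the predictors entered at step 2 explain variance beyond the predictors entered at step 1. A minimal sketch on simulated placeholder data; the predictor names are illustrative, not the study's actual coding:

```python
import numpy as np

def r_squared(X, y):
    """R^2 from an OLS fit with an intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

def delta_f(X_step1, X_step2, y):
    """F statistic for the increment in R^2 when the step-2 predictors
    are added to the step-1 predictors (hierarchical regression):
    dF = (dR^2 / q) / ((1 - R^2_full) / (n - p_full - 1))."""
    n = len(y)
    X_full = np.column_stack([X_step1, X_step2])
    r2_reduced, r2_full = r_squared(X_step1, y), r_squared(X_full, y)
    q = X_step2.shape[1]                 # number of predictors added
    p_full = X_full.shape[1]
    return ((r2_full - r2_reduced) / q) / ((1 - r2_full) / (n - p_full - 1))

# toy example: 'genres' entered at step 1, 'attributes' at step 2
rng = np.random.default_rng(2)
genres = rng.normal(size=(100, 3))
attributes = rng.normal(size=(100, 2))
y = genres @ [1.0, 0.5, -0.5] + attributes @ [0.8, 0.4] + rng.normal(size=100)
print(delta_f(genres, attributes, y) > 1.0)
```

Because the attributes genuinely contribute to the simulated outcome, the resulting ΔF is large; with pure-noise step-2 predictors it would hover around 1.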
Taken together, these results indicate that the MUSIC factors are not the result of preferences only for genres, but are driven significantly by preferences for certain musical characteristics. This suggests that individuals may be drawn to styles of music that possess certain musical features, regardless of the genre of the music. Although genres accounted for more variance in the MUSIC model than did attributes, it should be noted that there were more genres (26) than attributes (14) in the regression analyses, and that adding predictors to a multiple regression model tends to inflate the resulting multiple correlation.
The present research replicates and extends previous work on individual differences in music-genre preferences (e.g., Delsing et al., 2008; Rentfrow & Gosling, 2003), which suggested four to five robust music-preference factors. We examined a broad array of musical styles and assessed preferences for several pieces of music. The results from three independent studies converged, revealing five dimensions underlying music preferences. Although the pieces of music used in Study 1 were completely different from those used in Studies 2 and 3, the findings from all three studies revealed five clear and interpretable music-preference dimensions: a Mellow factor comprising smooth and relaxing musical styles; an Urban factor defined largely by rhythmic and percussive music; a Sophisticated factor composed of a variety of music perceived as complex, intelligent, and inspiring; an Intense factor defined by loud, forceful, and energetic music; and a Campestral factor comprising a variety of different styles of country and singer-songwriter music. Each of these factors resembles those reported previously, and the high degree of convergence across the present studies and previous research suggests that music preferences, whether for genres or for musical pieces, are defined by five latent factors.
The findings from Study 4 extend past research by informing our understanding of why particular musical styles covary. Indeed, we found that each factor has a unique pattern of attributes that differentiates it from the other factors. For instance, Sophisticated music is perceived as thoughtful, complicated, clear sounding, quiet, relaxing and inspiring, whereas Mellow music is perceived as thoughtful, clear sounding, quiet, relaxing, slow, and not complicated. The results from this study also suggest that preferences for the MUSIC factors are affected by both the social and auditory characteristics of the music. Specifically, musical attributes accounted for significant proportions of variance in preferences for the Mellow, Urban, Sophisticated, Intense, and Campestral music factors, over and above music genres. These results suggest that preferences are influenced by both the social connotations and by particular auditory features of music.
The present work provides a solid basis from which to examine a variety of important research questions. For example: Do the MUSIC factors reveal anything about the nature of music preferences? How do music preferences develop and how stable are they across the lifespan? Are the music-preference factors culturally specific? How do people use music in their daily lives?
The present research replicates previous research concerned with music preferences by showing that there is a basic structure underlying music preferences and extends that work by showing that the structure is not dependent entirely upon music-genre preferences. Indeed, we found that musical pieces from the same genre have their primary loadings on different factors and that the MUSIC factors comprise unique combinations of music attributes. This raises a question about the nature of music preferences: Are people drawn to a particular style of music (e.g., jazz, punk) because of the social connotations attached to it (creativity, aggression)? Or are people attracted to specific qualities of the music (e.g., dynamic, intense)?
If preferences are influenced strongly by the social connotations of music, as research on music stereotypes suggests (Rentfrow & Gosling, 2007; Rentfrow et al., 2009), then one should not expect musical pieces from the same genre to load on different factors; yet there was some evidence of exactly that pattern in all three studies. However, if preferences result from liking certain configurations of musical attributes, then the MUSIC model should emerge even within a heterogeneous selection of musical pieces from a single genre. It is conceivable that pieces of music within a single genre possess the various combinations of musical attributes that would yield a set of factors resembling the MUSIC model. Rock, classical, and jazz, for instance, are broad genres that comprise wide varieties of musical styles and subgenres. Future research could explore the factor structures of preferences for pieces of music within such genres. Evidence for a similar five-factor model would suggest that music preferences are driven by specific features of the music, not by its social connotations.
Future research should also examine a broader array of musical attributes. Most of the music-specific attributes we examined relate to timbre. Timbre refers to tone quality and comprises several more specific characteristics, which the attributes we used do not fully reflect. For instance, it would be informative to code musical pieces for different instrumental families (e.g., strings, brass, woodwinds, synthesizers) to gain even more precise information about the nature of the preference factors. In addition, there are acoustical parameters (e.g., pitch, rhythm), which our attributes do not directly tap, that reflect the grammar or syntax of music. These properties are critical and differentiate one piece of music from another. Thus, future research may also code for melodic attributes such as register (e.g., high, medium, or low) and melodic range (e.g., wide vs. restricted), as well as harmonic attributes (e.g., dissonant/harsh vs. consonant/sweet, diatonic vs. chromatic, and static vs. active).
These findings also have implications for work on music recommendation services (e.g., Pandora.com, Last.fm). The results from this and previous studies clearly suggest that there is some stability to the structure of music preferences; that is, certain musical pieces tend to be liked together. One of the ultimate goals of a music recommendation system is to characterize an individual’s musical preferences with an equation. Such an equation would include a number of parameters, such as the listener’s age, gender, education, and income, as well as the listener’s music preferences, which could include a score on each of the five MUSIC factors. There might be other parameters too, such as the time of day (presumably people like different music when they wake up than when going to sleep) and the mood of the listener. Together with such parameters, the MUSIC model might help improve music recommendation software: the MUSIC factors may capture the latent structure of individual music preferences better than traditional genre labels. Thus, future research could evaluate the efficacy of the MUSIC model in predicting which pieces of music individuals like and which ones they dislike.
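The kind of equation described above can be sketched in a few lines of code. The sketch below is purely illustrative and not part of the study: the factor weights, listener profile, and context term are all hypothetical, and a real recommendation system would estimate such weights from behavioral data rather than assign them by hand.

```python
# Illustrative sketch of a MUSIC-factor-based preference predictor.
# All values and names below are hypothetical examples, not study data.

FACTORS = ["mellow", "urban", "sophisticated", "intense", "campestral"]

def predict_liking(listener_scores, track_profile, context_bonus=0.0):
    """Predict liking as the dot product of a listener's five MUSIC-factor
    scores with a track's loading on each factor, plus an optional
    context adjustment (e.g., for time of day or current mood)."""
    base = sum(listener_scores[f] * track_profile[f] for f in FACTORS)
    return base + context_bonus

# A hypothetical listener who favors Mellow and Sophisticated music...
listener = {"mellow": 0.8, "urban": 0.1, "sophisticated": 0.6,
            "intense": 0.2, "campestral": 0.3}

# ...and a hypothetical track profile loading mainly on those factors.
track = {"mellow": 0.7, "urban": 0.0, "sophisticated": 0.9,
         "intense": 0.1, "campestral": 0.0}

print(round(predict_liking(listener, track), 2))  # prints 1.12
```

In such a scheme, genre labels never appear: both listeners and tracks are located in the same five-dimensional MUSIC space, which is what would let the model recommend pieces across genre boundaries.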
It seems reasonable to suppose that music preferences are shaped by psychological dispositions, social interactions, and exposure to popular media and cultural trends. Thus, preferences for a particular style of music may vary as a function of personality traits, social class, ethnicity, country of residence, and cohort, as well as the culture-specific associations with that style of music. However, the reliance on genre-based preference measures makes it difficult to examine music preferences among people from different generations and cultures, because their knowledge of and familiarity with the genres will vary significantly. The present findings suggest that audio recordings of music can be used effectively to study music preferences. This finding should help pave the way for future research by enabling researchers to develop music-preference measures that are not language based and can therefore be administered to individuals of different age groups, social classes, and cultures. Audio-based music-preference measures that include musical excerpts from a wide array of genres, time periods, and cultures will help researchers further explore the structure of music preferences and ascertain whether the MUSIC model is universal.
In the meantime, the MUSIC model provides a useful framework for conceptualizing and measuring music preferences across the life course. Future research is well positioned to examine some very important issues, including whether the MUSIC factors emerge in different age groups, whether individual differences in preferences for the MUSIC factors change throughout life, and whether social and psychological variables differentially affect music preferences over time.
The social connotations of particular musical styles are shaped by culture and society, and those connotations change over time. For example, jazz music now means something very different from what it did 100 years ago; whereas jazz is currently thought of as sophisticated and creative, earlier generations considered it uncivilized and lewd. This raises questions about the stability of the MUSIC model across generations. Are the factors cohort- and culture-specific, or do they transcend space and time?
It is tempting to suppose that the structure of music preferences may be more stable and enduring than the genres present in any given period, because styles of music come and go, their cultural relevance and popularity fluctuate, and consequently their social connotations change. Indeed, it is conceivable that there has been, and will continue to be, a Sophisticated music-preference factor that includes complex and cerebral music, even as the genres that comprise that factor change over time. Perhaps there will also continue to be Mellow, Urban, Intense, and Campestral factors of music preferences, but the genres that comprise those factors may change as their social connotations change. If so, then it is possible that the links between the MUSIC factors and personality are stable across generations.
Much of the research concerned with music preferences has focused on questions pertaining to its structure and external correlates; very few studies have actually examined the contexts in which people listen to music and the particular music they listen to. As a result, most of the research in this area conceptualizes preferences as trait-like constructs and assumes that preferences reflect the types of music people listen to most of the time. However, as Sloboda and O’Neill (2001) noted, music is always heard in context, so it is necessary to consider contextual forces and state-preferences in addition to trait-preferences. Indeed, trait variables necessarily interact with specific situations, and a type of fundamental attribution error (Ross, 1977) may be at work in judgments about music preferences. Weddings, funerals, sporting events, or relaxation, for example, constrain musical choices, and individual preferences operate within those constraints. One may prefer a particular piece or style of music (e.g., Chopin’s Polonaises) in a particular context (at home reading leisurely) but never want to hear it in another context (during a Pilates workout). A complete theory of musical preferences must necessarily focus on the functions of music and reflect situational constraints in interaction with personality traits.
A growing body of research has begun to identify some of the social psychological processes and environmental factors that link people to their music preferences. For instance, in a study in which new-age music of varying complexity (low, moderate, and high) was played in a dining area, participants reported preferring the music of low and moderate complexity (North & Hargreaves, 1996a). Further, when individuals were in unpleasant arousal-provoking situations (e.g., driving in busy traffic), they preferred relaxing music, whereas in pleasant arousal-provoking situations (e.g., exercising), they preferred stimulating music (North & Hargreaves, 1996b; 1997). Thus, it would appear that music preferences are, to some degree, moderated by situational goals.
Further exploration of music preferences in context should consider the emotional state of the individual prior to listening to music. Numerous studies have shown that music can elicit certain emotional reactions in listeners (see Scherer & Zentner, 2001), but there is considerably less information about how mood might influence our music selections or how we respond to the music that we hear. For instance, do people in a sad mood prefer listening to happy music in order to change their mood? Or do they prefer listening to mood-consistent music? Or do some individuals prefer the former strategy and others the latter?
One potentially fruitful direction would be to expand research on music attributes to focus more on the affective aspects of music preferences. It is obvious that in any one genre there are a variety of different moods expressed in the music; even one album could run the gamut of emotions. Thus, future research could further examine individual differences in preferences for musical attributes and whether certain attributes are preferred more in some situations over others.
It goes without saying that music is important to people. Curiously, however, we know very little about why it is so important. To shed some light on this issue, we need a sturdy framework for conceptualizing and measuring musical preferences. The present research provides a foundation on which to develop such a framework. Future research can build on this foundation by including a wider array of music from various genres and by exploring music preferences across generations, cultures, and social contexts. Such work will serve to inform our understanding of the nature of music preferences and their importance in people’s lives.
This research was funded by Grant AG20048 from the National Institute on Aging, National Institutes of Health, U.S. Public Health Service to LRG; and by Grant 228175-09 from the National Science and Engineering Research Council of Canada, and a grant from Google to DJL. Funds for the collection of data from the Internet samples used in Studies 1 and 2 were generously provided by Signal Patterns. We thank Samuel Gosling and James Pennebaker for collecting the data reported in Study 3, Chris Arthun for preparing the figures, and Bianca Levy for assisting with stimulus preparation, subject recruitment, and data collection. We are also extremely grateful to Samuel Gosling, Justin London, Elizabeth Margulis, and Gerard Saucier for providing helpful comments on an earlier draft of this report.
Publisher's Disclaimer: The following manuscript is the final accepted manuscript. It has not been subjected to the final copyediting, fact-checking, and proofreading required for formal publication. It is not the definitive, publisher-authenticated version. The American Psychological Association and its Council of Editors disclaim any responsibility or liabilities for errors or omissions of this manuscript version, any version derived from this manuscript by NIH, or other third parties. The published version is available at www.apa.org/pubs/journals/psp
Peter J. Rentfrow, Department of Social and Developmental Psychology, Faculty of Politics, Psychology, Sociology and International Studies, University of Cambridge, Free School Lane, Cambridge CB2 3RQ, United Kingdom.
Lewis R. Goldberg, Oregon Research Institute, 1715 Franklin Blvd., Eugene, OR 97403-1983, USA.
Daniel J. Levitin, Department of Psychology, McGill University, 1205 Avenue Penfield, Montreal, QC H3A 1B1 Canada.