Structural and functional asymmetries are present in many regions of the human brain responsible for motor control, sensory and cognitive functions and communication. Here, we focus on hemispheric asymmetries underlying the domain of social perception, broadly conceived as the analysis of information about other individuals based on acoustic, visual and chemical signals. By means of these cues the brain establishes the border between ‘self’ and ‘other’, and interprets the surrounding social world in terms of the physical and behavioural characteristics of conspecifics essential for impression formation and for creating bonds and relationships. We show that, considered from the standpoint of single- and multi-modal sensory analysis, the neural substrates of the perception of voices, faces, gestures, smells and pheromones, as evidenced by modern neuroimaging techniques, are characterized by a general pattern of right-hemispheric functional asymmetry that might benefit from other aspects of hemispheric lateralization rather than constituting a true specialization for social information.
The new field of human social neuroscience is investigating the neural correlates of social perception and cognition at an incredibly fast pace (Cacioppo & Berntson 2002; Adolphs 2003; Heatherton et al. 2004). Studying the brain mechanisms that underlie the perception and representation of others has become one of the most intriguing topics at the border of psychology and the neurosciences, not only because it has become easier to apply modern imaging techniques to stimuli related to the social dimension, but also because it is of crucial relevance to study how the brain manages the rich and complex information provided by the social environment. Many cues are exploited to decode the identity and characteristics of other individuals, in order to interact with them on the spot or to store them in memory in view of future interactions. These cues are principally conveyed by the visual, auditory and olfactory modalities. Moreover, there is ample evidence that visual, auditory and olfactory cues interact cross-modally, forming integrated person perceptions (Kovács et al. 2004; Platek et al. 2004; Campanella & Belin 2007). Touch could be listed as another source of social information, but despite its far from negligible involvement in social perception and interaction, it has received comparatively little attention (but see Bufalari et al. (2007) and Dunbar (in press) for recent theoretical and empirical work). Even more dramatically, the mere idea of gustatory inputs to social perception might be rejected because it contravenes human moral principles connected to contamination (Rozin et al. 1994). Both touch and taste are in fact modalities that clearly require the closest of contacts with the source of stimulation, whereas vision, hearing and smell demand a lesser degree of personal contact and are thus more ubiquitous in everyday social interaction.
The range of cues conveyed by the visual modality is very wide. At a first stage, perception involves the decoding of characteristics present in the physical features of faces and bodies, which are important for judging others' sex, age, ethnicity, health state, attractiveness and, of course, individual identity. At a subsequent stage, postures, gestures, facial expressions, gaze and behavioural patterns at various time-scales permit a person to decode others' psychological states, emotions, direction of attention and behavioural intentions. The voice can also be very informative about others' characteristics. Although it is difficult to separate sharply the information content of spoken language from its surface level, voice can convey social cues comparable with those that we can gather through the visual modality. Finally, personal odours and pheromones are bodily produced, airborne substances acting through the olfactory system in ways that influence individuals' responses to other individuals. Despite the controversies concerning the nature of pheromone physiology and its genuine relationship to olfaction, it can be safely maintained that pheromones are, together with visual and vocal features, important carriers of information about other individuals.
The other character of this review, hemispheric lateralization, has in some sense always had to do with the social dimension. Language, which is structurally and functionally a left-hemispheric function in the great majority of right-handers, can be considered the foremost social function in humans. Undoubtedly, Broca's and Wernicke's areas (the two main cortical areas responsible for processing language) are asymmetrical (at both the macro- and microscopic level of their anatomical organization, as well as functionally), and their discovery sharply marks the beginning of the history of scientific studies on brain asymmetries. Even though no function has ever been attributed to the right hemisphere with the same undisputed clarity as language has been attributed to the left hemisphere, spatial cognition and the processing of emotions can be considered the two most robust right-hemispheric functions demonstrated so far. It thus appears that social perception based on non-verbal cues would depend mostly on the right hemisphere, as the left is immediately ruled out of the story owing to its major involvement in linguistic processing. However, given that language is the instrument of social exchange among individuals, the brain in its entirety can be considered a ‘social engine’, mediating verbal and non-verbal signals by means of the interaction between the two hemispheres. Explanations of hemispheric asymmetries have also been proposed based on the computational advantages that follow from an asymmetric subdivision of tasks, disregarding the specific attribution of one or another function to the hemispheres.
Tests of computational efficiency, based among other things on the assumption that inter-hemispheric communication is slower and more costly than intra-hemispheric communication (owing to the necessary transmission through the fibres of the corpus callosum), have concluded that stimulus complexity, as well as task concurrency, are two likely factors that favoured the evolution of segregated specialization in the two hemispheres. Social information (for instance, the categorization of sex as assessed by the perception of facial features, vocal pitch and contour, and smell) is in a way a puzzling domain, because at first sight it is a complex type of information (it is certainly so in the everyday glossary of the neuroscientist) compared with the simplified stimuli generally used to assess the computational efficiency of the hemispheres, which have consisted of digits, letters or simple spatial patterns (e.g. Banich 2003). However, the ease and speed with which individual recognition or social judgement takes place, justified of course by the relevance of these activities for social interaction, seem to imply that this type of information must be accessed with high priority by the human brain. From this simple assumption stems the hypothesis that, although linguistic information demands a high degree of inter-hemispheric exchange, social information would not benefit much from being localized in only one hemisphere; rather, either a bihemispheric or a distributed network for social perception would seem the most advantageous solution.
Here, we provide a non-exhaustive, but tentatively extensive, review of the literature concerning the issue of hemispheric lateralization of social perception. Of course, there is more research on social perception and lateralization as independent fields than any single review could ever accommodate (and there are good reviews of both fields), but we felt that the intersection between these two topics is a relatively empty domain demanding closer attention. We believe that starting with the lateralization of voice perception is a way to acknowledge the fact that the human voice can hardly undergo processes of artificial transformation (as in the case of visual appearance with clothing/make-up or olfactory presence with perfumes) and can thus be considered the most honest of human social signals; the visual modality will be reviewed with respect to the specific issues of perception of faces, gaze, biological motion and gestures; lateralization of olfaction, and in particular of the brain substrates of pheromone processing, will conclude the review.
Voices are probably the most important social sounds of the human auditory scene. Humans may spend more time listening to voices than to any other sound, and their ability to analyse information contained in voice sounds is of basic importance in social interactions. It has been shown that many regions of the auditory cortex exhibit a greater response to vocal compared with non-vocal sounds, whereas no part of the same cortex shows the reverse pattern (Belin et al. 2000).
The human voice is the carrier of speech, which appeared relatively recently in the evolutionary history of primates as a particularly composite use of voice by humans (Hauser 1996; Fitch 2000). However, vocalizations were present in the auditory environment of vertebrates for a very long time before speech emerged, and appropriately perceiving the (non-verbal) information contained in vocalizations has since then been of crucial importance for survival. Thus, voices carry additional information other than speech, and humans, like many other species, are capable of extracting that information from voices. Even when speech cues are not contained in a voice, such as in a laugh, a cry or a tune, or because the vocal information is distorted or damaged, we are still able to extract information about the sex, the age and the affective state of the speaker and even about her/his personal identity (Belin et al. 2004).
The abilities involved in perceiving non-linguistic information in voices have been far less investigated than speech perception, and little is known about their neural bases. Recent findings, however, suggest that there are voice-specific areas in the human (Belin et al. 2000) and macaque (Poremba 2006; Petkov et al. 2008) brain and that these areas are organized in an asymmetrical way. The different types of vocal information are processed in partially dissociated functional pathways, and the parameters of voice, such as pitch, loudness, spectral content, amplitude envelope, formants, prosody and accent, seem to have specific neural modules dedicated to their analysis, which are often lateralized. Functional lateralization of voice or vocalization perception has been addressed by several behavioural and neuroimaging studies in both human and non-human primates. It should be borne in mind that some of these studies were carried out under the overarching hypothesis that investigating the neural mechanisms underlying voice perception might provide useful information concerning the evolutionary history of language. Thus, the role of voice itself has often remained in the background.
With the aim of investigating ear preferences in the perception of calls, Petersen et al. (1978) trained Japanese macaques (Macaca fuscata, an Old World monkey) to discriminate between conspecific calls. When communicatively relevant information was the key feature to be discriminated, the authors found that the five Japanese macaques they tested performed better when the stimuli were presented to the right ear than to the left ear. This result was not replicated in a control experiment in which the animals had to discriminate calls from other monkey species. Conversely, when Japanese macaques were trained to perform the discrimination on the basis of acoustical features of the stimuli, such as pitch, they showed either no advantage or even a left-ear advantage. The fact that in this study no right-ear advantage was observed when the same sounds were discriminated by pitch could be interpreted as suggesting that only the communicatively relevant features of the call engage lateralized processes. Alternatively, different features of the same call could be processed by partially distinct, differentially lateralized neural networks, as seems to be the case in humans. A subsequent study from the same group confirmed the presence of a right-ear advantage in the discrimination of intraspecific but not interspecific calls (Petersen et al. 1984). The lack of lateralization for a control primate species in these studies is particularly interesting, and suggests that these lateralized processes are engaged only by conspecific calls.
Another method used to measure functional asymmetries in non-human primates involves unilateral cortical lesions. Heffner & Heffner (1984, 1986) performed unilateral lesions of the superior temporal gyrus in macaques (encompassing primary as well as secondary auditory cortices) and measured their effects on call discrimination performance according to whether the lesion had been made in the left hemisphere (five animals) or the right hemisphere (five animals). A striking pattern of lateralization emerged: the animals that had received a lesion in the right hemisphere showed no noticeable deficit one week after the lesion, whereas the animals with a lesion in the left hemisphere showed a marked initial deficit followed by a progressive recovery over the following days. A second lesion to the remaining auditory cortex of the other hemisphere then completely abolished the ability to discriminate the calls.
More recently, Gil-da-Costa & Hauser (2006) conducted an experiment on vervet monkeys (Cercopithecus aethiops, an Old World monkey) using a head-orienting procedure, and found a strong left ear bias for both familiar and unfamiliar conspecific vocalizations, whereas no asymmetry was found for other primate vocalizations or non-biological sounds. This finding raises significant questions about how ontogenetic and evolutionary forces have impacted on primate brain evolution, and suggests that although auditory asymmetries for processing species-specific vocalizations are a common feature of the primate brain, the direction of this asymmetry may be relatively plastic. Further head-orienting experiments in field studies yielded additional useful information on the cerebral lateralization of the processing of calls in non-human primates. Hauser & Andersson (1994) monitored the orienting response to conspecific calls in a large number of rhesus macaques. The majority of adult macaques were found to orient to the sound source by turning their head to the right, thus seeking to increase sound amplitude in the right ear, preferentially connected to the left hemisphere. Conversely, they tended to present the left ear to the source when a familiar, but heterospecific (i.e. from another species) alarm call was played. Infants tested using the same paradigm failed to show any head-turning preference. The authors interpreted this finding as evidence for left-biased cerebral lateralization for processing conspecific calls in the rhesus macaque, as for human speech, but only once a certain stage of maturation is reached. Further studies using the same paradigm but acoustically modified calls showed that temporal modifications such as expansion or contraction (Hauser et al. 1998) or temporal inversion (Ghazanfar et al. 2001) could eliminate or reverse the right-ear advantage.
In humans, speech processing engages left-lateralized networks in most right-handed subjects (Broca 1861; Price 2000), but processing of pitch, timbre or identity from the same vocal input can reverse this pattern and yield a right-hemispheric advantage (Zatorre et al. 1992a; Brancucci & San Martini 1999, 2003; Von Kriegstein et al. 2003; Brancucci et al. 2005). Koeda et al. (2006), aiming to clarify the role of voice perception in language dominance for lexical and semantic processing, scanned 30 healthy right-handed subjects by functional magnetic resonance imaging (fMRI) while they listened to sentences, reversed sentences and identifiable non-vocal sounds. They found a right-lateralized activation in the anterior temporal cortices in the contrast ‘reversed sentences minus non-vocal sounds’. Conversely, both the contrast ‘sentences minus non-vocal sounds’ and the contrast ‘sentences minus reversed sentences’ showed left-lateralized activation in the inferior and middle frontal gyri and the middle temporal gyrus. Of note, the contrast ‘sentences minus reversed sentences’, which removes the influence of human voice perception, showed no activation in the right temporal cortex. These results suggest that the influence of human voice perception should be adequately taken into account when language dominance is determined, and point to the presence of a prominent right-lateralized activation for human voice perception. In a further fMRI study, the same group (Koeda et al. 2007) demonstrated that, within right-handers, the degree of handedness does not influence the magnitude of the right-hemispheric bias for voice perception. Lattner et al. (2005) examined the neural processing of voice information by using event-related fMRI. They controlled for the role of the major acoustic parameters as well as for the sex of both listeners and speakers. Male and female, natural and manipulated voices were presented to 16 young adults who were asked to judge the naturalness of each voice.
The activations were generally stronger in response to female voices as well as to manipulated voice signals, whereas the influence of the listener's sex was negligible. The results showed that voice pitch is processed in regions close and anterior to Heschl's gyrus, with a bias towards the right hemisphere. Voice spectral information was observed to be processed in the posterior parts of the superior temporal gyrus and in the areas surrounding the planum parietale bilaterally. Finally, a prominent role of the anterior parts of the right superior temporal gyrus was observed for the processing of voice naturalness. This study demonstrates again the fundamental role of the human right hemisphere in voice processing. Belin et al. (2000) used fMRI to measure brain activity during passive stimulation with a large variety of natural sounds grouped in blocks of either vocal or non-vocal sounds. The cortical regions showing the highest selectivity for voice were consistently located along the superior bank of the superior temporal sulcus (STS), with a prevalence for right-sided activation. Moreover, with stimuli degraded by frequency filtering, the activity of those areas reflected subjects' behavioural performance. The authors suggest that the voice-selective areas in the upper bank of the STS may represent the counterpart of the face-selective areas in human visual cortex, pointing to the existence of cortical regions selective for voice sounds that would be similar to the known face-specific areas. In a study using dichotic listening to measure the lateralization of voice recognition abilities in normal subjects (Kreiman & Van Lancker 1988), listeners had to identify both the speaker (a famous male) and the word pronounced on each trial. The voice identification task resulted in a zero ear advantage, which differed significantly from the right-ear advantage found for word identification.
This result suggests that voice and word information, although carried in the same auditory signal, engage different cerebral mechanisms and different degrees of lateralization. A further dichotic study, aimed at investigating ear asymmetries in the recognition of unfamiliar voices (Riley & Sackeim 1982), extended the evidence of right-hemispheric superiority to this type of stimulus.
Fewer studies have used neuroimaging techniques to investigate the perception of affective information contained in voice. In these works, brain activity was measured during stimulation with speech stimuli in which prosody was manipulated in order to convey various emotional states. Studies using positron emission tomography (PET; George et al. 1996) or fMRI (Wildgruber et al. 2002; Mitchell et al. 2003) generally emphasized the greater activation of the right temporal lobe and right inferior prefrontal cortex when attention was directed to emotional prosody, confirming earlier clinical work (Ross 1981; Heilman et al. 1984). More recently, the neural bases of emotional perception in voice have been studied outside the context of speech by using affective non-verbal vocalizations such as laughs, cries, groans and other more primitive vocal expressions of emotion. To date, PET (Phillips et al. 1998; Morris et al. 1999) and fMRI (Sander & Scheich 2001) studies have not shown consistent asymmetries in the perception of pure (non-verbal) emotional processing.
Relatively little is known about the neuronal bases of speaker identity perception and recognition. Some clinical studies have documented cases of brain-lesioned patients with a deficit in speaker discrimination or recognition (Van Lancker & Kreiman 1987; Peretz et al. 1994). These studies generally show that deficits in discriminating unfamiliar speakers or deficits in the recognition of familiar speakers (‘phonagnosia’) can be dissociated, but both seem to occur more often after lesions in the right hemisphere. Importantly, a double dissociation between speech perception and speaker recognition has been demonstrated by cases of preserved speech perception but impaired speaker recognition, as well as cases of aphasia with normal voice recognition (Assal et al. 1981). This supports a model of the organization of voice processing in which speech and identity information are processed in partially dissociated cortical regions. Francis & Driscoll (2006) trained their subjects to use voice onset time (VOT) to cue speaker identity. Successful learners showed shorter response times for stimuli presented only to the left ear than for those presented only to the right. The development of a right hemisphere specialization for processing a prototypically phonetic cue such as VOT supports a lateralization model in which the functional demand drives the side of processing in addition to stimulus properties (Simos et al. 1997).
Again, only a few neuroimaging studies have investigated the perception of identity information. Imaizumi et al. (1997) were the first to use PET to examine patterns of cerebral activity induced by speaker identification. Subjects were scanned while performing a forced-choice identification of either the speaker or the emotion in non-emotional words pronounced by four actors with four different emotional tones. They found that, in both hemispheres, the anterior temporal lobes were more active during speaker identification than during emotion identification. In a subsequent study (Nakamura et al. 2001), the same group scanned normal volunteers with PET while they performed a familiar/unfamiliar decision task on voices from unknown persons and from their friends and relatives. A comparison task consisted of deciding whether the first phoneme of sentences pronounced by unfamiliar speakers was a vowel or a consonant. The results showed that several cortical regions, including the entorhinal cortex and the anterior part of the right temporal lobe, were more active during the voice familiarity task. Interestingly, the amount of activity in the right anterior temporal pole was found to correlate positively with the subjects' performance on a speaker identification task administered just after scanning. Von Kriegstein et al. (2003) used fMRI to measure brain activation during identification tasks directed to either the speaker's voice or the verbal content of sentences in German. They found that the right anterior STS and a part of the right precuneus were principally active when the identification task was focused on the speaker's identity, whereas left middle temporal regions showed enhanced activity more related to verbal/semantic processing. Thus, even if the vocal stimuli were similar in the two conditions, directing attention to vocal identity or speech content was found to modulate lateralized activity in the STS regions.
A convergent finding was obtained by Belin & Zatorre (2003) in an fMRI study with an opposite design. In this experiment, two conditions shared a common passive listening task, but blocks of vocal stimulation were composed either of the same syllable spoken by 12 different speakers, or of 12 syllables spoken by the same speaker, thus repeating either speaker or syllable. Only one region of the auditory cortex, in the right anterior STS, showed reduced activity when different syllables were pronounced by the same voice compared with different voices pronouncing the same syllable. This reduced response to the same voice was interpreted as an adaptation response by neuronal populations sensitive to the idiosyncratic acoustic features of a speaker's voice. Thus, there is clear converging evidence for an important role of anterior temporal lobe regions of the right hemisphere, particularly right anterior STS regions, in processing information related to speaker identity. This is consistent with recent models of the organization of the primate auditory cortex (Kaas & Hackett 1999; Rauschecker & Tian 2000) in which a ventral ‘what’ stream, homologous to the similar pathway in the visual system (Ungerleider & Haxby 1994), would be specialized in the recognition of auditory objects, and in particular of individual voices. Note that the STS is a long, heterogeneous structure: cytoarchitectonic and connectivity studies in the rhesus monkey have demonstrated a division of this area into several uni- or polymodal areas organized in a precise sequence of connections with one another and with other regions of the cortex (Seltzer & Pandya 1989). Thus, the various STS activations observed in neuroimaging studies of different cognitive processes probably correspond to several functionally distinct regions.
Recent electroencephalography (EEG) and magnetoencephalography (MEG) studies have also addressed the question of voice selectivity. Levy et al. (2001, 2003) used event-related potentials (ERPs) to compare the responses evoked by sung voices and by tones played by different musical instruments. No difference between the voices and instruments was observed for the N1 component (i.e. the first negative response recorded with EEG or MEG 100 ms after the onset of the sensory stimulus, reflecting early cortical processing), a result that was replicated with MEG (Gunji et al. 2003). Conversely, in the cited EEG (but not MEG) studies, a ‘voice-specific response’ could be observed, peaking at approximately 320 ms after stimulus onset and stronger on the right side. The authors suggested that this component, distinct from the ‘novelty P300’ (i.e. a positive response recorded with EEG approximately 300 ms after the onset of the sensory stimulus, reflecting middle-to-late cortical processing), might reflect the allocation of attention related to the salience of voice stimuli.
Finally, the lateralization of the processes underlying the perception of one's own voice has also been investigated. A recent study consisting of a series of three experiments (Rosa et al. 2008) investigated functional asymmetries related to self-recognition in the domain of voices. Participants were presented with self, familiar or unknown voices as well as with morphed voices, and were asked to perform a forced-choice decision on speaker identity with either the left or the right hand. In accordance with a previous intracranial recording study showing that neuronal responses to the subject's own voice in the dominant and non-dominant temporal lobe were about equally affected by overt speech (Creutzfeldt et al. 1989), this study did not reveal strong laterality effects, except for a slight rightward bias for self-recognition, similar to that observed in the visual domain for the recognition of one's own face.
To summarize, the available results on lateralization for the processing of purely vocal information, i.e. deprived of linguistic cues, show that the brain areas involved in such analysis are located mainly in the upper part of the temporal lobes (STS), and that the balance between the two sides of the brain leans towards the right. This claim is based on behavioural studies in non-human and human primates as well as on neuroimaging studies in humans. The competition between the left and right hemispheres in the processing of voice can be viewed as a parallel process in which individual neural populations are devoted to the analysis of single physical aspects of voice. This perspective is based on the current view of hemispheric specialization as structured in a parameter-specific rather than a domain-specific fashion (Zatorre et al. 2002). According to the parameter-specific hypothesis, the classical domain-related dichotomy (speech left versus non-speech right) has given way to a physical dichotomy that assigns better temporal resolution to the left auditory cortex and better spectral resolution to the right auditory cortex (Zatorre 2003; Tallal & Gaab 2006; Hickok & Poeppel 2007; Brancucci et al. 2008). Concerning the perception of voice, this physical dichotomy would mean that those cues contained in natural voices that need high temporal resolution to be properly analysed (i.e. principally language) are processed mainly in left-hemispheric areas, whereas those cues that do not need high temporal resolution, or that need fine spectral resolution to be properly analysed, are processed mainly in right-hemispheric areas.
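The rightward balance described above is typically quantified in neuroimaging work with a laterality index computed over hemispheric activation measures. The sketch below uses the standard convention LI = (L - R) / (L + R); this formula is common in the lateralization literature but is not given in this review, and the voxel counts are purely illustrative, not taken from any cited study.

```python
def laterality_index(left: float, right: float) -> float:
    """Standard laterality index LI = (L - R) / (L + R).

    LI ranges from -1 (fully right-lateralized) to +1 (fully
    left-lateralized); values near 0 indicate bilateral processing.
    """
    total = left + right
    if total == 0:
        raise ValueError("no activation measured in either hemisphere")
    return (left - right) / total

# Purely illustrative voxel counts: a voice-identity condition engaging
# right STS regions more strongly, and a speech condition engaging left
# temporal regions more strongly, as the reviewed studies suggest.
li_voice = laterality_index(left=120, right=310)   # negative: right-lateralized
li_speech = laterality_index(left=450, right=160)  # positive: left-lateralized
print(f"voice identity LI = {li_voice:+.2f}")
print(f"speech LI = {li_speech:+.2f}")
```

Note that the threshold used to declare a process "lateralized" (often |LI| > 0.2) varies between studies, which is one reason reported asymmetries for voice perception differ in strength across the literature.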
Faces are certainly the most relevant ‘social objects’ in the visual domain: it would not be an overstatement to say that faces are the most important objects of social perception altogether. By perceiving faces we assign individuals precise characteristics that define their individuality, from their inclusion into classes and categories (male or female, old or young, black or white) to the assessment of their attractiveness, fitness, mood and emotional tone. Sometimes this process leads to long-term encoding of the characteristics of a given face in memory and to association with a name for future retrieval. Faces are thus a special domain in our social world, and the brain mechanisms underlying their perception and representation reflect this special status.
Structurally, faces are stimuli that resemble one another in their ‘faceness’: discounting the enormous variance between faces, and extracting the attributes common to all possible faces, is necessary to categorize a face as such and not as a mobile phone or a spider. A basic configuration of fundamental facial features and their spatial relationships must necessarily be present in the brain if rapid categorization is to be carried out to discriminate faces from other biological and non-biological objects (Tsao & Livingstone 2008). This is typically shown by the demonstration that face recognition becomes almost impossible when faces are presented upside down, the so-called ‘face inversion effect’ (Yin 1969). Further aspects of shape and microstructure in the face space must be extracted for rapid categorization of sex, ethnicity, age, attractiveness, emotion and even subtler cues such as health state as revealed by the texture of skin, eyes and lips (Bruce & Young 1986; Brown & Perrett 1993; Rhodes 2006). These are characteristics that can be shared by large subsets of faces, although their constancy varies in time from very stable, even over the course of a lifetime (such as sex and ethnicity), to extremely variable, even within fractions of a second (such as emotional expressions). The temporal stability of features revealing age, for instance, holds on a time-scale long enough to allow for a stable interpretation of identity over months or years. Other characteristics, such as health state or attractiveness, might vary at an intermediate temporal scale, depending on season (and susceptibility to illness), metabolism and, in the case of women, oestrus. Even more crucially, individual recognition must be based upon the precise encoding of the absolute features present in a given face, because approximate or fuzzy encoding would make undesirable recognition errors possible.
Accurate individual recognition, moreover, must discount the enormous variability present within a single face at different moments in time and from different points of view in space. The history of studies on face perception and its neural substrates is long and still very lively. Faces are objects of experience for which dedicated neural machinery exists; they are processed automatically and are preferentially looked at over any other category of objects from very early in development (Tsao & Livingstone 2008).
Demonstrations that face perception is a matter of right-hemispheric specialization came relatively long ago from studies using chimeric faces (stimuli obtained by juxtaposing the left and right halves of different faces), from studies using the divided visual field technique, from neuropsychological evidence on brain-lesioned patients with selective impairments in face recognition (prosopagnosia), and from work on split brain patients. Among the first demonstrations of asymmetries in face perception are the investigations of Wolff (1933), who observed that the right half of a face, more than the left half, carries the impression conveyed by the full face: to come to this conclusion he compared the impression evoked by chimeric faces created by adjoining the left (or right) halves of faces to their mirror images, noticing that R+R chimeras resembled the original faces more than L+L chimeras did. The right side of a face falls to the left of its observer, and the possibility that this effect was due not to structural differences between the two halves of faces, but rather to differences between the two halves of the perceptual space of observers, was later demonstrated by Gilbert & Bakan (1973). Rhodes (1985) proposed a model of hemispheric lateralization for face perception that assigned to the right hemisphere a role in early categorization and representation, and to the left hemisphere a role in face–name association and in the retrieval of semantic information associated with faces. Leftward perceptual asymmetries for the recognition of face attributes, supporting such right-hemispheric involvement, have been confirmed in many studies using chimeric faces: for instance, Burt & Perrett (1997) showed that the left bias is present for judgements on sex, age and attractiveness, but not for the recognition of phonemes associated with lip movements. Other studies confirmed and extended these results for not only sex (Luh et al. 1994; Butler et al.
2005; Parente & Tommasi 2008) but also attractiveness and health (Zaidel et al. 1995; Reis & Zaidel 2001), with the possible exception of another attribute, trustworthiness (Zaidel et al. 2003).
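The chimeric-face construction used in these studies can be sketched in a few lines. The sketch below is illustrative only: the array layout and the `chimeras` helper are assumptions for this example, not taken from any of the cited studies.

```python
def chimeras(face):
    """Build L+L and R+R chimeric faces (Wolff-style) from a face image.

    `face` is a list of pixel rows; low column indices correspond to the
    half of the face that falls to the observer's left.  Illustrative
    sketch only, assuming an even image width.
    """
    w = len(face[0])
    # Adjoin each half-face to its own mirror image, as in Wolff (1933)
    ll = [row[: w // 2] + row[: w // 2][::-1] for row in face]
    rr = [row[w // 2:][::-1] + row[w // 2:] for row in face]
    return ll, rr

# Toy 4-pixel-wide "face": columns 0-1 form the left half, 2-3 the right
face = [[0, 1, 2, 3],
        [4, 5, 6, 7]]
ll, rr = chimeras(face)
# ll[0] == [0, 1, 1, 0]; rr[0] == [3, 2, 2, 3]
```

Asking observers which chimera looks more like the original face, or which of two mirror-image chimeras appears happier or more attractive, then yields the left-sided perceptual bias discussed in the text.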
Chimeric faces can be presented in free vision in order to observe such left-sided (right-hemispheric) bias. Experiments making use of the divided visual field technique, instead, consist of the very brief (tachistoscopic) presentation of stimuli accurately confined to the left or the right visual hemifield, ensuring that the stimuli are initially processed by the hemisphere contralateral to the side of presentation. Using this technique with faces as stimuli in a variety of tasks, a right-hemispheric superiority for facial recognition has been substantially confirmed by various researchers over many decades (Rizzolatti et al. 1971; Leehey et al. 1978; Grega et al. 1988; Rhodes & Wooding 1989; Dutta & Mandal 2002). In this scenario of converging evidence, a factor that has been shown to modulate lateralization is the attribute of familiarity, as different investigations led to conflicting results when the hemispheric asymmetry for the recognition of familiar or famous (versus unknown) faces was tested, some evidence suggesting a left-hemispheric superiority (Marzi & Berlucchi 1977), other studies a right-hemispheric superiority (Levine & Koch-Weser 1982), and others no difference between the hemispheres (Kampf et al. 2002), or interactions between laterality and angle of view (Laeng & Rouw 2001). Clearly, the perception of familiarity of known faces is associated with the retrieval of semantic (including linguistic) information (i.e. the name associated with a given face) that might call for left-hemispheric cooperation beyond the right-hemispheric involvement generally evidenced in face perception research. Chimeric faces and divided visual field presentations have also been widely exploited to investigate lateralized processing of facial expressions of emotion. The literature on this aspect is abundant, and two main models have been proposed to explain emotional lateralization (see Demaree et al. (2005) for a comprehensive review).
The right hemisphere hypothesis posits a right-hemispheric superiority in the production and perception of all emotional expressions (Campbell 1978; Ley & Bryden 1979; Levy et al. 1983; Tucker et al. 1995), whereas the valence hypothesis assumes a left-hemispheric superiority for the processing of positive emotions and a right-hemispheric superiority for the processing of negative emotions (Reuter-Lorenz & Davidson 1981; Borod et al. 1997). The latter model is also in line with another influential model of hemispheric lateralization, the ‘approach–withdrawal’ model, positing that, from the standpoint of motivation, the left hemisphere is more strongly associated with approaching rewards and the right hemisphere with withdrawing from punishments (Davidson 1993). Recent accounts have partially reconciled these two hypotheses, as it seems that the right hemisphere hypothesis could hold true for emotion perception, whereas the valence hypothesis could hold true for the production of expressions and for the conscious experience of emotions (Canli 1999; Gainotti 2000), with a possibly higher involvement of the right hemisphere in the perception of basic (when compared with socially complex) emotions (Prodan et al. 2001; Shamay-Tsoory et al. 2008).
Another source of information on asymmetries of face processing comes from neuropsychological studies of prosopagnosia, a pathological condition in which the patient fails to recognize others' faces even when they belong to people encountered frequently (while being unimpaired in the recognition of other categories of visual objects). Prosopagnosia is usually associated with bilateral damage in the temporal lobes (Damasio et al. 1982), but unilateral damage to the right hemisphere can suffice to induce the pathology (Kolb et al. 1983; De Renzi 1986; De Renzi et al. 1994). Finally, evidence from split brain patients, already present in the pioneering work of the Nobel laureate Roger Sperry, showed that interrupting interhemispheric communication by resection of the corpus callosum did not prevent the identification of others' faces and self-recognition (Sperry et al. 1979). This special case, the recognition of one's own face, has more recently resurfaced as a topic of interest in the study of split brain patients and normal subjects, providing conflicting evidence on the hemispheric biases of self-recognition, much resembling the contradictory results obtained in the case of face familiarity: some studies have reported a right-hemispheric lateralization (Preilowski 1977; Keenan et al. 2001, 2003), other studies a left-hemispheric lateralization (Turk et al. 2002; Brady et al. 2004) and others no asymmetry (Uddin et al. 2005). Neuropsychological data on both prosopagnosic and split brain patients have also concerned the specific issue of perception of emotion expression, and the overall pattern of evidence, taken together with that derived from behavioural studies in normal subjects, confirms that the identification and recognition of facial affect and emotion expression is separate from identity recognition, and might be more strongly lateralized in favour of the right hemisphere (Bowers et al. 1985; Stone et al. 1996; Adolphs et al. 2001; Coolican et al.
2008).
The story turned out to be more complicated with the advent of neuroimaging techniques, as the evidence for right-hemispheric superiority in the perception of faces became less clear-cut than previous research making use of purely behavioural paradigms seemed to suggest. Research has firmly established that the analysis of faces in the human brain depends on a distributed cortical network involving a number of regions in both hemispheres (Haxby et al. 2000; Rossion et al. 2003; Ishai et al. 2005; Ishai 2008). Recent work in non-human primates (both single cell electrophysiology in behaving monkeys and neuroimaging), moreover, is generating evidence supporting the existence of such a network (Pinsk et al. 2005; Tsao et al. 2006; Rolls 2007; Gross 2008). It is, however, quite undisputed from neurophysiological work on primates that some aspects of face processing are lateralized, more often in the direction of a stronger involvement of the right hemisphere (Perrett et al. 1988; Zangenehpour & Chaudhuri 2005; Tsao & Livingstone 2008), a result that has been found even in the non-primate brain (sheep: Peirce & Kendrick 2002). The primate brain regions involved in the perception of face identity are areas receiving their input from the occipital cortex, and are located along the occipito-temporal stream in the visual pathway (Ungerleider & Haxby 1994). Neuroimaging studies of face perception have clearly shown that the main stations along this stream are the inferior occipital gyrus (IOG; also known as occipital face area), the fusiform gyrus (FG; also known as fusiform face area, FFA), the superior temporal sulcus (STS) and the anterior inferotemporal cortex (aIT).
The relative contribution of these regions, as assessed by fMRI during a number of passive and active tasks on various types of facial stimuli, appears to be differential, some regions being more involved in the analysis of ‘stable’ features necessary to recognize identity, and other regions more involved in the analysis of ‘variable’ features necessary to process intentions and visual cues to communication (i.e. gaze direction and lip movements). Although the network is assumed to be largely bilateral, work on IOG, FFA and aIT (the role of STS in gaze selectivity will be dealt with later) has often evidenced a major involvement of the right hemisphere during identification and recognition of faces (Kanwisher et al. 1997; Halgren et al. 2000; Ishai et al. 2000; Grill-Spector et al. 2004; Rotshtein et al. 2005; Kanwisher & Yovel 2006; Kriegeskorte et al. 2007), and it has been suggested that the corresponding areas in the left hemisphere might subserve a more general process of object recognition, in a less face-selective fashion. Other evidence points instead to an asymmetrical subdivision of competences, assigning a primacy for global analysis of faces to the right FG and for local or feature-based face analysis to the left FG (Rossion et al. 2000; Harris & Aguirre 2008), a result suggested by previous behavioural and electroencephalographic studies of asymmetries in the processing of inverted faces and objects (Leehey et al. 1978; Levine et al. 1988; Hillger & Koenig 1991; Rossion et al. 1999), and grounded in a more general theoretical framework hypothesizing that the left hemisphere is specialized for processing high spatial frequencies and the right hemisphere for processing low spatial frequencies (Sergent 1983).
IOG, FFA and STS constitute the main centres of a neural network for face processing, and are heavily connected to other areas where further processing takes place. Most notably, connections to the amygdala, insula, basal ganglia, orbitofrontal cortex (OFC) and other regions in the frontal cortex seem to subserve the processing of emotional expression, the assessment of reward content inherent in or associated with faces (i.e. attractiveness, status), and self-perception. For instance, Krendl et al. (2006) showed activation of the insula and the amygdala during evaluation of faces associated with negative judgements (stigmatization). Stronger activation of these two regions was shown during observation of faces with physical imperfections (obesity, unattractiveness and facial piercings). The amygdala is considered a major centre for the processing and integration of emotion and cognition, and it is no accident that emotional expression as perceived on faces strongly evokes activation in this region, as shown by a large number of imaging studies (see Adolphs & Spezio (2006) for a review). Importantly, the activation of the amygdala associated with the perception of emotional faces (but also other categories of emotional stimuli) is most often bilateral or lateralized to the left hemisphere: in a meta-analysis of amygdala activation across a large number of PET and fMRI studies (Baas et al. 2004), lateralization was almost invariably found in favour of the left hemisphere when the stimuli involved were emotional faces (see Noesselt et al. (2005) for opposite results). Recently, however, it has been argued (Sergerie et al. 2008) that this pattern of asymmetry might be due to the differential time course of activation decay of the left (slower) and right (faster) amygdalae and its interaction with the fMRI experimental design (block versus event-related).
The anterior insula and the adjacent frontal operculum have been associated more tightly with the perception of the specific emotion of disgust (regardless of input modality, gustatory or visual), but lateralized activation of this region appears most of the time to be non-significant (Calder et al. 2001; Jabbi et al. 2007) or leaning towards the right hemisphere (Phillips et al. 1997).
Thus, behavioural evidence in normal subjects, clinical data and neuroimaging studies strongly support the idea that face processing depends more on right-hemispheric activity, although the asymmetry in itself appears to be largely functional, given that a bilateral circuitry for face representation has been ascertained. Importantly, in a recent study (Yovel et al. 2008) a strong correlation was found between the left visual field bias observed in a chimeric face task and the asymmetrical activation of the FFA in the right hemisphere during fMRI scans, crucial evidence linking behavioural and neuroimaging work and restoring confidence in the tenability of purely behavioural paradigms.
Gaze plays a central role in social interactions, informing individuals about others' attention, goals and intentions. Perception and interpretation of gaze direction are automatic and effortless processes in normal individuals, while they are altered in autistic subjects and in schizophrenic patients (Pinkham et al. 2008). Attention-orienting paradigms (Posner 1980) are typically used to study gaze processing. In such paradigms, a face is first presented centrally, with gaze directed towards the subject or with the eyes masked; it is then followed by the same face with the eyes looking to the left or to the right. Finally, a target is presented to the left or right, congruently or incongruently with the gaze cue: chronometric differences in the recorded reaction times between conditions can reveal the effects on attention allocation due to the presence or absence of the gaze cue and its direction. Given the importance of perceiving others' gaze, one might expect that it receives specialized processing in the brain. Studies investigating the neural substrates of gaze direction processing have found that the vision of moving eyes activates the STS. As reviewed by Allison et al. (2000), much evidence suggests that the STS plays a central role in the perception of gaze, together with the FG (Kanwisher et al. 1997; Puce et al. 1998; Wicker et al. 1998; Hoffman & Haxby 2000; Haxby et al. 2002; Hooker et al. 2003) and the amygdala (Kawashima et al. 1999; Adolphs et al. 2005; Benuzzi et al. 2007).
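The chronometric logic of such gaze-cuing paradigms reduces to a simple difference of mean reaction times. The trial format and function name in this minimal sketch are hypothetical, for illustration only:

```python
from statistics import mean

def cuing_effect(trials):
    """Gaze-cuing effect in ms: mean reaction time on incongruent trials
    minus mean reaction time on congruent trials.  A positive value
    indicates that attention was shifted towards the gazed-at location.
    `trials` is a list of (condition, rt_ms) pairs; illustrative only."""
    congruent = [rt for cond, rt in trials if cond == "congruent"]
    incongruent = [rt for cond, rt in trials if cond == "incongruent"]
    return mean(incongruent) - mean(congruent)

# Hypothetical reaction times (ms) from four trials
trials = [("congruent", 310), ("congruent", 290),
          ("incongruent", 340), ("incongruent", 330)]
effect = cuing_effect(trials)   # 35 ms cuing effect
```

In actual experiments, the same subtraction is computed per subject and per cue type (e.g. gaze versus arrow, left versus right hemifield) before statistical comparison across conditions.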
Some evidence is also suggestive of hemispheric asymmetries in the neural substrates of gaze perception, and behavioural research on domestic chicks shows that these might be relatively conserved phylogenetically (Rosa Salva et al. 2007). Data on humans come from Ricciardelli et al. (2002), who showed that eye gaze is processed better when presented in the left visual field than when it is presented in the right visual field, supporting the idea of a right-hemispheric specialization for eye gaze perception. Moreover, Calder et al. (2007), using fMRI, demonstrated that clusters of neurons of the STS selectively sensitive to eye gaze direction are primarily present in the right hemisphere. A similar result was also found for the right inferior parietal lobule. In an experiment (Conty et al. 2007) in which EEG activity was recorded during the presentation of faces suddenly turning their gaze towards the subject or away from her/him, a negative peak emerged at 170 ms (N170) after stimulus presentation, markedly enhanced in the condition of direct gaze when compared with averted gaze. The source analysis revealed a complex network of sources, composed of four clusters of activation: two in dorsomedial prefrontal regions, one in the OFC and the fourth in the right STS.
Grosbras et al. (2005) performed a wide meta-analysis of brain-imaging studies, finding that the networks responsible for gaze perception are more similar to those involved in reflexive than to those involved in voluntary shifts of attention and eye movements. The analysis indicated that gaze perception, reflexive shifts of covert attention and visually guided eye movements all activate the temporoparietal junction near the ascending branch of the STS in the right hemisphere. Some lateralization has also been found in the amygdala: Wicker et al. (1998) found a strong activation of the right amygdala during gaze perception. Right-hemispheric lateralization in the processing of gaze perception is also suggested by neuropsychological evidence obtained in patients suffering from brain damage. Akiyama et al. (2006a,b) described a patient with a right superior temporal gyrus lesion who was impaired at determining the direction of observed gaze but able to interpret arrow cues, in agreement with the results of Hietanen et al. (2006), suggesting that orienting of attention by gaze cues and orienting of attention by arrow cues are not supported by the same cortical network.
Biological motion perception has an important adaptive role; it allows animals to predict future actions of prey, predators and mates, and to decide whether to move towards or away from them (Regolin et al. 2000). Social animals, such as humans, behave largely on the basis of their interpretations and predictions about the actions of others (Blakemore & Decety 2001), and given the evolutionary importance of detecting biological motion, it would be logical to expect neural machinery dedicated to its perception. Brain-imaging studies, in which investigators usually contrasted brain activation produced when observers viewed animations of point lights following the trajectories of joints, limbs and other relevant body parts (PL animations) with activations produced when viewing scrambled versions of the same animations, have attempted to investigate whether the perception of biological motion is subserved by a specific neural network in humans.
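The scrambling manipulation used in these contrasts can be sketched as follows: each point light keeps its own motion trajectory but is displaced to a random starting position, destroying the global body configuration while preserving local motion. The data layout and the `scramble` function are assumptions for illustration, not taken from any cited study:

```python
import random

def scramble(animation, seed=0):
    """Spatially scramble a point-light animation.

    `animation` is a list of frames, each frame a list of (x, y) points.
    Every point is shifted by its own random offset relative to its
    first-frame position, so frame-to-frame motion is unchanged but the
    overall body configuration is destroyed.  Illustrative sketch only.
    """
    rng = random.Random(seed)
    n_points = len(animation[0])
    offsets = [(rng.uniform(-1, 1), rng.uniform(-1, 1))
               for _ in range(n_points)]
    start = animation[0]
    return [[(x - sx + ox, y - sy + oy)
             for (x, y), (sx, sy), (ox, oy) in zip(frame, start, offsets)]
            for frame in animation]

# Two frames of a toy two-point "walker"
anim = [[(0.0, 0.0), (1.0, 1.0)],
        [(0.5, 0.0), (1.0, 2.0)]]
scr = scramble(anim)
```

Because the per-point displacements between frames are identical in the coherent and scrambled versions, any differential brain activation can be attributed to the global configuration rather than to low-level motion energy.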
A large number of brain areas are involved in biological motion perception: the motion-sensitive area MT/V5+, the posterior superior temporal gyrus (pSTG), the posterior superior temporal sulcus (pSTS), the ventral temporal cortex (Thompson et al. 2005), the ventral FG (Vaina et al. 2001; Grossman & Blake 2002), the posterior FG (or fusiform body area; Peelen & Downing 2005), the so-called ‘extrastriate body area’ corresponding to the posterior inferior temporal sulcus (Downing et al. 2001; Astafiev et al. 2004), and the parietal, premotor and inferior frontal regions involved in action recognition and execution (Rizzolatti et al. 2001; Saygin et al. 2004). A central role in this large network is played by the pSTS in the right hemisphere. The STS represents a relay for the dorsal and ventral visual streams (Felleman & Van Essen 1991), in which form and motion information arising from the same person are integrated (Shiffrar 1994; Oram & Perrett 1994). Although STS neurons are largely visual, their activity can be modulated by the motor system (Hietanen & Perrett 1996) and by the amygdala (Aggleton et al. 1980). Using PET, Bonda et al. (1996) identified regions along the posterior portions of the STS (pSTS) that were activated when people viewed coherent, but not scrambled, point light actions. This activation was stronger in the left hemisphere during observation of hand movements, and in the right pSTS when subjects observed body motion. Using fMRI, Grossman et al. (2000) found more activation sites in the right pSTS than in the left, and stronger responses to upright human motion than to upside-down animations (the ‘body inversion effect’; Grossman & Blake 2001).
In a recent study, Peuskens et al. (2005) found that it is primarily the pSTS in the right hemisphere that responds strongly to human motion, a trend apparent also in the results of other works (Beauchamp et al. 2003). The pSTS is also robustly activated when one views whole-body motions rather than PL sequences (Pelphrey et al. 2003), as well as when one views motion confined to specific limbs or to the eyes, hand and mouth (Calvert et al. 1997; Puce et al. 1998; Grèzes et al. 2001). Interestingly, brain areas including the pSTS in the right hemisphere also respond robustly when people view humanly impossible movements (Costantini et al. 2005). Santi et al. (2003) used fMRI to dissociate brain areas responsive to whole-body actions portrayed in PL animations from brain areas responsive to visible speech rendered using PL animations. While there were a few overlapping activation areas, the speech animations selectively activated the left pSTS, portions of the auditory cortex, and a network of motor regions including Broca's area; the whole-body PL animations, besides activating the pSTS in the right hemisphere, selectively activated the FG bilaterally and a network of more rostrally located cortical areas that Santi et al. believe are involved in the mirror neuron system. In their fMRI study, Wheaton et al. (2004) found that face, hand and leg motion activate the STS, MT/V5+, anterior intraparietal cortex (aIP) and ventral premotor cortex, predominantly in the right hemisphere. Saygin (2007) examined biological motion perception in 60 unilateral stroke patients, and found no evidence in the neuroanatomical data suggesting either left or right hemisphere dominance for biological motion perception. Saygin supposed that the right lateralization of biological motion perception in previous studies may be explained by the ‘social’ aspects elicited by human motion rather than by body movement per se.
The central role of the right temporal cortex is instead supported by other neuropsychological data: Vaina & Gross (2004) described four patients with right temporal lesions who failed to recognize biological motion while still being able to correctly report the global direction of the point lights in the display, indicating that they were not ‘motion blind’.
Gestures constitute a particular type of human movement. During human interactions, gestures assume many meanings, and the perception and recognition of gestures are cognitive abilities that allow us to interpret, and even predict, the actions of others (Meltzoff & Decety 2003; Rizzolatti & Craighero 2004). Gestures can be transitive, if the action involves the use of a tool and/or an object, or intransitive (usually hand actions with a symbolic connotation). Some authors (e.g. Kendon 2004) think that gestures, which palaeoneurologists suppose to be derived from tool use (Frey 2008), belong to the linguistic system. It has also been suggested that language evolved from manual gestures: the relationship between hand and mouth began with ingestive movements that were progressively adapted for communication (Gentilucci & Corballis 2006). Others posit that gesture and speech are two different communication systems, depending upon different brain structures (Levelt et al. 1985; Hadar et al. 1998). The localization evidence obtained so far points to a bilateral brain network underlying gesture perception that, however, includes many of the known structures in the right hemisphere shown to support the perception of biological motion per se: for instance, an fMRI study by Villarreal et al. (2008) demonstrated an extensive, common network underlying the recognition of gestures, consisting of the right pre-supplementary motor area, the right STS/STG, the left inferior posterior parietal cortex, and the bilateral superior posterior parietal cortex (PPc), precuneus, fusiform gyri, occipitotemporal regions and visual cortices. Aziz-Zadeh et al. (2006), studying the lateralization of the mirror neuron system, found that the visuomotor regions of the system engaged during action observation and imitation (when compared with mere execution) are largely bilateral, suggesting an evolutionary independence of the (left-lateralized) language system from gestural and visual inputs.
To further understand the differences and commonalities of language and gesture, it is useful to consider briefly the case of sign language. In subjects who communicate using sign language, bilateral cerebral activation has often been observed. This could be explained by the fact that in signers the two hemispheres mediate different kinds of information: damage to the right hemisphere in users of American Sign Language produces deficits in processing the spatial topography associated with the relative positioning of arms and hands, whereas damage to the left hemisphere reduces the expression of grammatical relationships conveyed by signs. MacSweeney et al. (2004) compared the neural correlates of viewing a gestural language and a manual-brachial code (Tic Tac) relative to a low-level baseline task, and found activation in an extensive frontal-posterior network in response to both types of stimuli. The superior temporal cortex, including the planum temporale, was activated bilaterally in response to both types of gestures. In signers, a greater activation for sign language than for Tic Tac was observed in the left posterior STS and STG, extending into the supramarginal gyrus. This suggests that the left posterior perisylvian cortex is of fundamental importance to language processing, regardless of the modality in which it is conveyed. Furthermore, Sakai et al. (2005) used fMRI to examine hemispheric dominance during the processing of both signed and spoken sentences in normal and deaf subjects. They found left-dominant activations involving frontal and temporoparietal regions: the ventral part of the inferior frontal gyrus, the precentral sulcus, the superior frontal gyrus, the middle temporal gyrus, the angular gyrus and the inferior parietal gyrus. Finally, perception of fingerspelling (in which different hand configurations are used to produce the different letters of the alphabet) determines activation in both the left and the right mid-FG (Waters et al.
2007). Biological motion as a visual input for social perception thus appears to be largely dependent on a neural network lateralized in the right hemisphere, but the cooperation of left-hemispheric structures is required for processing the content of those biological motions that convey meaning, i.e. gestures.
The social systems of many species rely upon chemical signals passed between individuals, carrying information about reproductive and health status and individual identity that has important influences on adaptive processes (Brennan & Kendrick 2006; Shepherd 2006). Chemical communication is based on a peculiar class of signals that differ from stimuli such as light, sound and pressure, which are well described by systematic variations in wavelength, frequency or other physical dimensions. Chemical stimuli have no equivalent metric, because they can be described only using a multidimensional space (Haddad et al. 2008). The general idea is that chemical signals can no longer be classified simply as volatile (odorants) and non-volatile, but should be grouped primarily on the basis of their different behavioural and physiological effects (Sbarbati & Osculati 2006): semiochemicals (i.e. a food smell—substances eliciting a conscious perception and carrying information categorized as attractive, repulsive, stimulating, deterrent, etc.), allelochemicals (substances produced by members of one species influencing the behaviour of members of another species, eliciting conscious perception, typical of prey–predator relationships and symbiosis), pheromones (substances eliciting an unconscious perception and triggering a behavioural response in another member of the same species, often activating a neuroendocrine response) and vasanas (social chemosignals that are classifiable neither as odours nor as pheromones, that are not consciously detectable as odours, and that affect psychological state without triggering a unique set of behavioural, neural or endocrine responses). In humans too, the chemical environment is perceived starting from the detection of these classes of stimuli by the known chemosensory systems (olfactory system, trigeminal system, vomeronasal organ) and ascending to several higher brain areas.
In particular, pheromones stimulate olfactory as well as vomeronasal sensory neurons (Zufall & Leinders-Zufall 2007). For instance, a class of pheromones, the major histocompatibility complex class I peptides, has been shown to activate both olfactory receptors involved in mate choice (Boehm & Zufall 2005) and vomeronasal sensory neurons required for the Bruce effect (the exteroceptive block of pregnancy; Bruce 1959), as shown by Spehr et al. (2006). The traditional distinction whereby common odours are perceived through the olfactory pathway and pheromones through the vomeronasal pathway thus appears incorrect, or too simplistic. Furthermore, leaving aside the debate on whether humans possess a functional vomeronasal system, it has been demonstrated by functional imaging that sex pheromones activate several regions of the human brain (Savic et al. 2001).
One key aspect is the fact that the anatomical pathways of the chemical senses are organized ipsilaterally, whereas those of vision and hearing are organized contralaterally. Olfactory inputs emerging from the main olfactory and/or vomeronasal systems are processed by the ipsilateral primary olfactory cortex (including the piriform cortex and entorhinal cortex) and subsequently at the level of the OFC, insula and amygdala, the latter playing a major role in the learning and recognition of social chemosignals as well as being a hub for visual and acoustic emotion-related information. The cross-modal integration of information processed by the chemosensory systems with that processed by the visual and auditory systems provides interesting evidence of asymmetries that extends our current view of the lateralization of the human social brain. One basic fact that sets the stage for olfactory lateralization is the evidence that the maximum nasal airflow rate is congruent with handedness: cycles of breathing during which one nostril dominates alternate; however, the right nostril is used more frequently by right-handers and the left nostril more frequently by left-handers (Searleman et al. 2005).
Despite the ipsilateral organization of the main neuroanatomical pathways, behavioural and imaging evidence indicates that olfactory stimuli are processed by both the ipsilateral and the contralateral hemisphere (Savic & Gulyas 2000). However, many studies show that the right hemisphere appears to be involved more than the left in the recognition and evaluation of olfactory stimuli, regardless of the side of nostril stimulation (Zucco & Tressoldi 1988; Zatorre & Jones-Gotman 1990; Zatorre et al. 1992b; Zald & Pardo 2000; Dijksterhuis et al. 2002), and the right OFC seems to have a special role in this circuitry. However, many aspects of olfactory lateralization seem to depend strongly on the task accomplished or the stimuli used (Brand et al. 2001; Royet & Plailly 2004): when subjects are exposed to strongly aversive stimuli, the activity of the OFC is stronger in the left hemisphere and a bilateral activation of the amygdala is observed (Zald & Pardo 1997; Anderson et al. 2003). Quite in disagreement with the right hemisphere hypothesis mentioned above, it has been suggested that activation of the OFC is lateralized to the left hemisphere for emotional aspects (i.e. detecting the pleasantness or the edibility of a stimulus) and to the right hemisphere for familiarity (i.e. the recognition of a known stimulus; Royet & Plailly 2004). More intriguingly, differential patterns of asymmetric activation of the OFC emerge depending on the level of pleasantness of the stimuli: as mentioned, a higher left-hemispheric activity (in the OFC) can be observed during exposure to aversive stimuli (Zald & Pardo 1997; Anderson et al. 2003). However, when a subject is exposed to a pleasant smell, the asymmetry reverses and a higher activity of the right hemisphere is observed in the OFC and the piriform cortex (Zatorre et al. 2000; Gottfried et al.
2002), an asymmetry also suggested by psychophysical evidence based on unilateral presentation (Herz et al. 1999; Dijksterhuis et al. 2002). These results are also in disagreement with the valence hypothesis mentioned above, so it appears that data on olfactory lateralization are not easily reconciled with models of emotional lateralization. One factor that, to our knowledge, has never been stressed in discussions of olfactory lateralization measured by behavioural tests is a possible confound: unilateral nostril breathing, which is necessarily used in the lateralized presentation of odorants, has been shown to modulate performance on verbal and spatial tasks in a manner consistent with the functions of the hemisphere contralateral to the open nostril (Schiff & Rump 1995). Little is known about the laterality of what could be considered the most obvious manifestation of olfactory social perception: personal body odour. In a PET study carried out on female subjects, Lundström et al. (2008) recently showed that the brain activation induced by smelling body odours was topographically different from that induced by non-body odours. Interestingly, body odours activated a circuit including more non-olfactory than olfactory regions, one example being robust right-hemispheric activation in the occipital and angular gyri. The familiarity of the body odours (i.e. personal odour, odour of a friend, odour of a stranger) induced further topographical differentiation, with higher left-hemispheric activation (including insula and amygdala) following exposure to a body odour never smelt before, and higher right-hemispheric activation following exposure to personally familiar body odours.
Many studies indicate that putative human sexual pheromones include androstadienone (4,16-androstadien-3-one, AND), androstenol (5α-androst-16-en-3α-ol), androstenone (5α-androst-16-en-3-one) and estratetraenol (1,3,5(10),16-estratetraen-3-ol, EST). Strong evidence of the effects of pheromones on human behaviour and physiology ranges from the synchronization of the menstrual cycle (McClintock 1971; Graham 1992; Weller & Weller 1993) to avoidance of, or preference for, sitting on a chair sprayed with a putative male pheromone (Cowley et al. 1977; Kirk-Smith et al. 1978). Male pheromones act on females during the ovulatory period, inducing a stronger sexual selection of symmetric male faces (Thornhill & Gangestad 1999), and around ovulation women perceive the male pheromone as less aversive than in other cycle phases (Grammer 1993). Interestingly, the large body of behavioural research on pheromones has neglected possible links to hemispheric lateralization: only recently has neuroimaging work begun to provide data on the brain activity underlying pheromone processing, and asymmetries have started to emerge. In females, it has been shown that AND activates the anterior ventral hypothalamus, mostly the preoptic and ventromedial nuclei, but not the olfactory regions (piriform, orbitofrontal and insular cortex) or the amygdala; the latter regions are instead activated (mostly in the right hemisphere) when EST is smelt (Savic et al. 2001). In males, by contrast, smelling EST activates the hypothalamus (especially the paraventricular and dorsomedial nuclei) but not olfactory regions, whereas smelling AND activates the amygdala, piriform cortex, cerebellum and postcentral gyrus, predominantly in the right hemisphere.
This sex-dependent pattern of activation, apart from revealing a physiological substrate for a differentiated sexual response in humans, is thus accompanied by lateralization patterns when subjects are exposed to pheromone-like substances. These results were confirmed and expanded in further studies comparing the brain activation of homosexual and heterosexual subjects exposed to EST or AND (Savic et al. 2005; Berglund et al. 2006). Of note, when homosexual men smelled EST the left amygdala and piriform cortex were primarily recruited (although a minor portion of the anterior hypothalamus was also included), whereas the asymmetries in activation when lesbian women smelled AND were comparably weaker. Savic & Lindström (2008) have recently analysed the hemispheric asymmetries of homo- and heterosexual subjects in more detail. Magnetic resonance volumetry and PET measurements of regional cerebral blood flow were carried out in heterosexual and homosexual men and women to investigate, respectively, structural asymmetries and the functional connectivity of the amygdala. Heterosexual men and homosexual women showed larger right hemispheres, whereas the volumes of the cerebral hemispheres were symmetrical in homosexual men and heterosexual women. Homosexual subjects, moreover, showed sex-atypical connections of the amygdala: in both homosexual men and heterosexual women, the connections were more widespread from the left amygdala, whereas in heterosexual men and homosexual women they were more widespread from the right amygdala. This result echoes the recent finding that greater resting functional connectivity of the right amygdala is present in males but not in females, whereas greater functional connectivity of the left amygdala is observed in females but not in males (Kilpatrick et al. 2006).
The overall pattern of lateralization underlying pheromone processing thus appears to be strongly sex-dependent: it is predominantly right-hemispheric in males and less so in females.
Recognizing others and keeping track of their identity in memory is a function necessary for assigning appropriate roles to agents in the social environment. Beyond individual recognition and identification, other functions of social perception deserve special attention because of their role in survival, such as the judgements needed for kin selection, mating, cooperation and competition, and the understanding of others' mental and affective states necessary for interaction. Finally, the tight coupling between the sensory and motor processes underlying the execution of one's own actions and the observation of others' actions, given its importance for social regulation (i.e. in empathy, imitation and communication), gives an idea of how deeply the functions of the nervous system are adapted to the interactive nature of human sociality. The study of single- and multi-modal cues to social perception has revealed strong reciprocal influences between auditory, visual and chemical inputs (Kovács et al. 2004; Platek et al. 2004; Campanella & Belin 2007), and in this review we have attempted to summarize current knowledge of their neural bases, with a focus on hemispheric asymmetries.
It is, however, hard to draw clear-cut conclusions from the evidence presented here on the lateralization of these cues and their interaction in social perception, of which this review can be considered merely the tip of the iceberg. More problematic still is the fact that many crucial aspects have been completely or partly left out of the review, sex differences and handedness probably being the most important given their relevance for both social perception and brain asymmetry. Nevertheless, some remarks can be offered on emerging aspects that might be valuable for better understanding the role of brain asymmetries in social neuroscience. In all three modalities considered, the assignment of a dominant role to the right hemisphere in social perception appears well deserved. For many of the functions reviewed, stronger involvement of the right hemisphere in coding some aspect of person perception seems to be the rule, whereas the left hemisphere appears to play at most a shared role, and only exceptionally a main one. Before neuroimaging, purely behavioural investigations and clinical studies of patients with unilateral lesions provided strong and suggestive evidence of right-hemispheric lateralization for social perception. Taking the ‘face processing circuit’ as an example (Ishai 2008; Tsao & Livingstone 2008), neuroimaging and electrophysiology have allowed a clearer and more detailed description of neural topography and chronometry than could be obtained using behavioural techniques and lesion data, and the discovery of the substantial bilaterality of the brain structures dedicated to representing faces, despite much evidence of left visual field advantages in face and emotion perception, stands out as a topic for discussion.
Generalizing from what has been discovered about the neural bases of face processing, both hemispheres appear structurally endowed for processing cues to social perception, and the asymmetry is evident in the net balance of right-hemispheric activation, both when social perception is superficial and transient (Haxby et al. 2001) and when it is focused on specific social cues such as sex, if sex is relevant for social judgement (Yovel et al. 2008). The same seems to be true for more complex domains of social representation, such as theory of mind (ToM), the ability to attribute mental states to other individuals. ToM is in fact believed to involve a bilateral neural network (Gallagher & Frith 2003), but the activity of this circuit also depends on a right-lateralized contribution specifically involving the orienting of attention to emotional cues present in faces (Narumoto et al. 2001). The right-hemispheric asymmetries found ubiquitously in social neuroscience might thus depend on an asymmetrical triggering mechanism in one or more basic functions outside the recently discovered neural correlates of social perception. Spatial attention is a first candidate: the orienting of attention necessary to establish a first-person perspective in space depends strongly on right-lateralized structures (Vogeley & Fink 2003), and the automatic orienting of attention to spatial locations is also believed to depend on the right hemisphere (Corbetta & Shulman 2002). In order to perceive others as separate from ourselves, it is of fundamental importance that a solid spatial framework centred on the observer is maintained and updated. Spatial mapping and the directing of attention to locations in space are two functions needed for this aim, and they engage a neural network in which the PPc of the right hemisphere plays a crucial role.
Spatial attention might thus be a right-hemispheric function driving the activation of ipsilateral neural structures specialized for person perception, and an obvious advantage coming with this guiding function is that relative spatial positions of ‘self’ and ‘other’ would be available automatically before any further processing takes place. As spatial information (distance, orientation, etc.) is essential for social perception, it would not be implausible to suppose that the right social brain is ‘primed’ by the right spatial brain.
Another function that might have a driving effect on the lateralization of social perception is the major role in avoidance behaviour attributed to the right hemisphere, in opposition to the major role in approach behaviour attributed to the left (Davidson 1993, 2003). This RH-avoidance/LH-approach system might reasonably have its default mode in the more conservative of the two dispositions (avoidance), whose effects are immobility and freezing, attentive scrutiny of novelties, and energy conservation (Braun 2007), all behavioural aspects that might have conferred an evolutionary advantage over uncontrolled approach. Avoidance can be a winning default strategy, but it cannot be the only strategy: past the first phases of interpersonal knowledge, when humans become acquainted with each other, affiliative behaviour is required. The attribute of familiarity certainly has a key role in social perception, and it appears to be the most controversial factor modulating the lateralization of the social brain. A mixed pattern of absent, leftward and rightward lateralization emerges from studies of the neural substrates underlying the processing of the familiarity of faces, voices and personal odours, and it is not clear how the approach/avoidance hemispheric subdivision could explain the discordant results obtained across these modalities. In fact, right-hemispheric lateralization has been found more frequently in familiarity tasks, but the counterevidence is puzzling and calls into question aspects of the familiarity attribute that bring us to the final considerations, concerning the dependence of the social brain upon the brain regions responsible for language processing.
A large body of evidence (Banich 1998, 2003; Weissman & Banich 1999) shows that interhemispheric communication is strongly beneficial for allocating resources to demanding tasks, but that distracting factors (as in the Stroop effect) can also produce interference across the hemispheres (Compton et al. 2000). Compton (2002) extended the idea of an interhemispheric cooperation advantage to the domain of faces, showing that identity and emotion comparisons are best carried out across the hemispheres. This demonstration supports the idea that both hemispheres are capable of processing social information, and that the net lateralization of social perception might be a consequence of other determinants. Recently, it has been demonstrated that the interhemispheric transfer of social information (faces) can be influenced by verbal information (words) in the other hemisphere when the tasks to be accomplished on the two types of information differ (dual task; Bergert et al. 2006). Importantly, the direction of the interhemispheric transfer necessary to accomplish the tasks (left to right or right to left) had no effect, supporting the idea of an equally distributed representation of social information across the hemispheres. Strikingly, Hirnstein et al. (2008) showed that the strength of hemispheric lateralization measured in subjects before they carried out a dual task demanding interhemispheric transfer (of faces and words) was inversely correlated with success in managing the tasks, further suggesting that equally distributed hemispheric resources pay more than strongly lateralized ones.
If the social brain is structurally bilateral but functionally right-sided, as much empirical evidence suggests, the complementary left-sidedness of the language function might have favoured inter- over intrahemispheric communication in all those situations in which non-verbal and verbal information must interact, such as in associating semantic information (i.e. names) with perceptual appearance, or more commonly in linguistic interactions. Moreover, interhemispheric cooperation has been shown to facilitate familiarity encoding through repeated experience (Mohr et al. 2002). Given the strength and stability of the left-hemispheric asymmetry of language processing, the advantages apparently conferred by interhemispheric transfer might have further supported the right-hemispheric asymmetry in social perception, together with the bootstrap effect of spatial attention and the influence of an avoidant (conservative) default state. The lateralization of the social brain might thus be the net result of several forces, all ultimately relevant for sociality and interaction, that act concurrently on the right hemisphere.
The authors acknowledge the financial support of the Commission of the European Communities, through the project EDCBNL (Evolution and Development of Cognitive, Behavioural and Neural Lateralization—2006/2009), within the framework of the specific research and technological development programme ‘Integrating and strengthening the European Research Area’ (initiative ‘What it means to be human’).
One contribution of 14 to a Theme Issue ‘Mechanisms and functions of brain and behavioural asymmetries’.