Neuropsychologia. Author manuscript; available in PMC 2011 January 1. PMCID: PMC2794980

Nonverbal auditory agnosia with lesion to Wernicke’s area


Abstract

We report the case of patient M, who suffered unilateral left posterior temporal and parietal damage, brain regions typically associated with language processing. Language function had largely recovered since the infarct, with no measurable speech comprehension impairment. However, the patient exhibited a severe impairment in nonverbal auditory comprehension. We carried out extensive audiological and behavioral testing in order to characterize M’s unusual neuropsychological profile. We also examined the patient’s and controls’ neural responses to verbal and nonverbal auditory stimuli using functional magnetic resonance imaging (fMRI). We verified that the patient exhibited persistent and severe auditory agnosia for nonverbal sounds in the absence of verbal comprehension deficits or peripheral hearing problems. Acoustical analyses suggested that his residual processing of a minority of environmental sounds might rely on his speech processing abilities. In the patient’s brain, contralateral (right) temporal cortex as well as perilesional (left) anterior temporal cortex were strongly responsive to verbal, but not to nonverbal sounds, a pattern that stands in marked contrast to the controls’ data. This substantial reorganization of auditory processing likely supported the recovery of M’s speech processing.

Keywords: Environmental sounds, temporal cortex, aphasia, dissociation, fMRI


Introduction

Auditory agnosia is a rare neuropsychological disorder that is characterized by a relatively isolated deficit in auditory processing, despite normal hearing. The disorder has been associated with bilateral temporal or subcortical lesions; less frequently, unilateral lesions have also been reported (see Clarke, Bellmann, Meuli, Assal, & Steck, 2000; Griffiths, 2002; Saygin, Dick, Wilson, Dronkers, & Bates, 2003; Vignolo, 1982, 2003).

Auditory agnosia restricted to nonverbal sounds is an even rarer phenomenon, previously associated with bilateral (Albert, Sparks, Stockert, & Sax, 1972; Kaga, Shindo, Tanaka, & Haebara, 2000; Kazui, Naritomi, Sawada, Inoue, & Okuda, 1990; Spreen, Benton, & Fincham, 1965) or right hemisphere (Fujii et al., 1990) lesions. Here, our focus is on the processing of sounds for meaning, or on what has sometimes been called associative, as opposed to apperceptive, auditory agnosia (Buchtel & Stewart, 1989). Previous case studies of agnosia restricted to the nonverbal domain have mainly focused on the apperceptive variant, and as such, the present case report is unique in the literature.

We report the case of Patient M, who presented with chronic auditory agnosia restricted to nonverbal sounds following a lesion to the language dominant hemisphere. M suffered a left posterior temporoparietal infarct, including Wernicke’s area (Fig. 1). We report on the patient’s clinical history, audiological tests, and a detailed assessment of his auditory comprehension, accompanied by an acoustical analysis of the sounds he could and could not recognize. We also assessed his neural responses to verbal and nonverbal auditory material using functional magnetic resonance imaging (fMRI).

Figure 1
MRI of the patient’s lesion. Five selected axial images are shown to depict the extent of the lesion. Neurological convention.

Case History

The patient (here referred to as ‘M’, not his real initial) was male and 74 years old at the time of testing. M suffered a cerebrovascular accident at 62 years of age, suddenly developing right-sided numbness and difficulty speaking. Neurological examination at the time led to a diagnosis of acute stroke and revealed decreased sensation and control over the right extremities, as well as language impairment.

After medical stability was achieved, M underwent comprehensive occupational, physical, psychological and speech-language therapy, making substantial gains in all these areas. Specifically, he underwent intensive speech therapy for 12 weeks. His wife reported that after this period, they continued to work at home for several hours a day every day, with the patient eventually regaining his language abilities.

The patient was unlikely to have had atypical language lateralization before his stroke. He was right-handed, never changed handedness, and had no left-handed relatives. Furthermore, had he been right-hemisphere dominant for language, he would have been unlikely to present with language deficits following a left hemisphere stroke. The patient did not show deficits associated with right hemisphere infarct, such as visuospatial neglect, either following his stroke or at the time of testing (Saygin, Wilson, Dronkers, & Bates, 2004; Saygin, 2007). MRI, acquired at the time of the present tests, confirmed an extensive lesion covering left temporal and parietal cortex (Fig. 1), with no evidence of subsequent infarct.

The patient came to our attention 10 years after his stroke. His aphasia had resolved into effective verbal communication and fluent speech, with occasional word-finding difficulties (usually with long or infrequent words, e.g., “rehabilitation”, “bronchitis”, “jacuzzi”). On the Western Aphasia Battery (Kertesz, 1979), his aphasia quotient (AQ) was 91/100 and he was classified as an Anomic aphasic. His language comprehension was found to be unimpaired. In particular, he scored 60/60 in auditory word recognition. This performance pattern clearly rules out the word deafness and verbal comprehension deficits that are commonly associated with posterior lesions of the left hemisphere.

Speech and Environmental Sound Comprehension

Despite his high-functioning neuropsychological profile and mild (and solely productive) aphasia, M showed a severe impairment in recognizing environmental sounds. This was first observed when M participated in an experiment exploring verbal and nonverbal auditory comprehension in aphasic patients (Saygin et al., 2003). In this study, a group of left hemisphere-injured patients (N=29) and age-matched controls were asked to match environmental sounds and corresponding verbal phrases to visually presented pictures of objects (e.g., picture of a cat for the phrase “cat meowing” or the sound of a meow). The same stimuli and paradigm have since been successfully used to contrast comprehension of verbal and nonverbal auditory material in a number of populations, details of which have been provided in previous publications (Cummings, Saygin, Bates, & Dick, 2009; Dick et al., 2007; Dick, Saygin, Paulsen, Trauner, & Bates, 2004; Saygin, Dick, & Bates, 2005; Saygin, Dick, Wilson, Dronkers, & Bates, 2003).

Deficits in the comprehension of speech sounds and environmental sounds went hand in hand for the left-hemisphere injured patients, with strong correlations between performance in verbal and nonverbal sound comprehension for both accuracy (p<0.0001; r2=0.75) and for reaction time (p<0.0001; r2=0.96). However, whereas Patient M showed a significant deficit in environmental sound comprehension (in comparison to controls, z=−6.09, p<0.0001), he was well within the normal range for speech comprehension (z=0.7, p>0.1). There were three other left hemisphere-lesioned patients who performed worse than M on the nonverbal material, but importantly they did not show a selective deficit, presenting with correspondingly poor performance in speech comprehension. Figure 2 shows M’s accuracy and reaction time for speech (verbal) and environmental (nonverbal) sounds, along with the rest of the left hemisphere-injured patients (Saygin, Dick, Wilson, Dronkers, & Bates, 2003). M was the only participant to demonstrate a quantifiable dissociation between verbal and nonverbal sound processing in this experiment (Bates, Appelbaum, Salcedo, Saygin, & Pizzamiglio, 2003).

Figure 2
M’s accuracy (A) and reaction time (B) plotted along with those of all left hemisphere-injured patients in a previously published study of speech and environmental sound comprehension (as measured in a sound-to-picture matching task, Saygin et al., 2003 ...
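The single-case comparisons above express M’s scores as z-values relative to the control sample. As a minimal illustration of that logic (the formal dissociation analysis of Bates, Appelbaum, Salcedo, Saygin, & Pizzamiglio, 2003 is more elaborate), the following Python sketch computes such a z-score; all numbers shown are hypothetical:

```python
import numpy as np

def patient_z(patient_score, control_scores):
    """z-score of a single patient's score relative to the control sample."""
    controls = np.asarray(control_scores, dtype=float)
    return (patient_score - controls.mean()) / controls.std(ddof=1)

# Hypothetical accuracies: far below controls on nonverbal trials,
# within range on verbal trials.
controls_nonverbal = [0.97, 0.95, 0.98, 0.96, 0.99, 0.94, 0.97]
controls_verbal    = [0.98, 0.96, 0.99, 0.97, 0.98, 0.95, 0.99]
print(patient_z(0.55, controls_nonverbal))  # strongly negative -> selective deficit
print(patient_z(0.97, controls_verbal))     # near zero -> spared performance
```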



Audiological Testing

A clinical audiologist conducted standard tests and an otoscopic examination, as well as Auditory Brainstem Response (ABR) measurements at 60, 80, and 90 dB (100-3000 Hz). Sound localization and music recognition were assessed using non-standard tests.

Behavioral Testing

We further tested M’s environmental sound comprehension using a large, extensively normed sample of environmental sounds. This was a sound naming experiment that had previously been administered to neurologically healthy subjects, and published in Saygin, Dick, & Bates (2005).

M was asked to listen to a set of environmental sounds and to provide a verbal description of each sound. The test was conducted at the patient’s home and was videotaped. The auditory stimuli were presented using a Macintosh portable computer and Yamaha YST-M7 speakers. The experimenter initiated each trial by pressing a key. Breaks were taken as necessary. There were a total of 236 sounds in the experiment. Seven trials had to be excluded due to noise or distraction during the sound presentation. As described in Saygin, Dick, & Bates (2005), each verbal response was given a score between 0 and 2 by two independent raters, where 0 denoted a wrong response (or no response), 1 denoted a response that was not exactly correct but had common elements with the correct response, and 2 denoted a correct response.
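To make the 0-2 rubric concrete, the sketch below aggregates two raters’ item scores into a percentage of the maximum possible score. This is one plausible aggregation, not necessarily the exact one used in Saygin, Dick, & Bates (2005); the function name and example values are ours:

```python
def percent_of_max(rater1_scores, rater2_scores):
    """Percent of the maximum possible score, each item worth 0-2 points per rater."""
    assert len(rater1_scores) == len(rater2_scores)
    total = sum(rater1_scores) + sum(rater2_scores)
    return 100.0 * total / (2 * 2 * len(rater1_scores))

# e.g., three items scored (2, 2), (1, 2), (0, 0) by the two raters:
print(percent_of_max([2, 1, 0], [2, 2, 0]))  # 58.3...
```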

Before naming each sound, neurologically healthy subjects had been instructed to press a button as soon as they believed they had identified the sound (Ballas, 1993; Saygin, Dick, & Bates, 2005). For M, however, this aspect of the task proved too difficult: even when he did respond, he often forgot to press the button, or pressed it after he had responded. There were thus not enough valid response times, and only accuracy data are reported.

Acoustical Analyses

Despite his severe deficit in environmental sound comprehension, M could identify some sounds (see Results). In order to discover any acoustical correlates underlying M’s deficit, we examined 18 acoustical measures of each environmental sound’s envelope, spectral characteristics, and harmonics in relation to M’s performance (see Table 1; Gygi, Kidd, & Watson, 2004, 2007; Lewis et al., 2004).

Table 1
Acoustical measures
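For reference, one of the measures in Table 1, the spectral centroid, is the amplitude-weighted mean frequency of a sound’s magnitude spectrum. A minimal sketch follows; the exact windowing and weighting used by Gygi et al. may differ:

```python
import numpy as np

def spectral_centroid(signal, sample_rate):
    """Amplitude-weighted mean frequency (Hz) of the magnitude spectrum."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    return float(np.sum(freqs * spectrum) / np.sum(spectrum))

# A pure 1 kHz tone has its centroid at ~1000 Hz:
sr = 44100
t = np.arange(sr) / sr
print(spectral_centroid(np.sin(2 * np.pi * 1000 * t), sr))
```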


fMRI

We explored M’s neural responses to verbal and nonverbal auditory material using functional magnetic resonance imaging (fMRI). As a reference, we also collected data from neurologically healthy controls.

Because M was claustrophobic, we had to design the fMRI study to keep the time in the MRI scanner short, and the procedure as simple as possible. We used a high signal-to-noise block design with no active task. Despite the short duration of the experiment, we were able to characterize the patient’s brain responses to speech and environmental sounds.

Patient M, 2 age-matched neurologically healthy controls, and 9 neurologically healthy controls aged 25-40 participated. Data from one of the younger control subjects could not be used due to timing problems with the stimulus computer. All participants gave informed consent in accordance with local ethics procedures. The age-matched controls were scanned using a Varian 3T scanner with a custom head coil. The remaining controls were scanned using a GE 3T scanner and a phased-array head coil. Because we did not want to average scans from different scanners, we report the data from the younger and the older controls separately.

We used a standard single-shot echo planar T2*-weighted gradient echo pulse sequence (TR=2400 ms, TE=30 ms, linear auto-shim) and acquired 31 interleaved slices covering the whole brain (3.75×3.75×3.8 mm voxels, 0 mm gap). We also acquired a multi-TE B0 field map, used to correct for distortions in the phase-encode direction, and a structural image from each subject (MPRAGE, TR=10.5 ms, TE=4.8 ms, 1×1×1.5 mm voxels).

The experiment featured a mixed block design with three block types of 24 s each: 1) environmental sounds (taken from the behavioral experiment); 2) speech sounds (moderate-frequency bi- or tri-syllabic nouns, or verb phrases corresponding to the environmental sounds, all recorded in a sound-proof booth); or 3) silence. For more information on the stimuli, please see Saygin, Dick, & Bates (2005).

Participants listened to the stimuli with eyes closed and were instructed to try to comprehend each sound. Sounds were presented in stereo, rapidly following each other with 100-120 ms ISI and an additional 200-300 ms between blocks. Prior to the actual scan, sound volume was adjusted individually for each subject such that the stimuli were loud enough to hear and identify over scanner noise with earplugs, but not so loud as to cause discomfort. Each participant was scanned in 2 runs, each lasting approximately 5 minutes and 50 seconds.

FMRI data were analyzed using SPM5. The functional images were co-registered to correct for head movements and, for the controls, spatially normalized to the standard (MNI) template. Each participant’s hemodynamic response was characterized using boxcar functions convolved with a synthetic hemodynamic response function (HRF), their temporal derivatives, a constant term, and a set of cosine basis functions serving as a high-pass filter. The head movement parameters estimated during the preprocessing stage were also included in the model. For each subject-specific model, linear contrasts were derived from the regression parameters. The resulting images were spatially smoothed using an isotropic Gaussian kernel (6 mm full-width-at-half-maximum). For controls, they were then entered into a random-effects analysis.
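To make the model concrete, the sketch below builds one condition regressor of the kind described: a 24 s boxcar convolved with a double-gamma HRF. This is only an approximation of SPM5’s canonical HRF and filtering, and the block ordering and run length here are assumptions:

```python
import numpy as np
from scipy.stats import gamma

TR = 2.4        # s, as in the acquisition
N_SCANS = 146   # ~5 min 50 s per run / TR (assumed)

def hrf(t):
    """Double-gamma approximation to the canonical HRF."""
    return gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0

scan_times = np.arange(N_SCANS) * TR
# Assumed 72 s cycle: 24 s speech, 24 s environmental sounds, 24 s silence.
speech_box = ((scan_times % 72) < 24).astype(float)
regressor = np.convolve(speech_box, hrf(np.arange(0, 32, TR)))[:N_SCANS]
# The full design adds the other condition, temporal derivatives, motion
# parameters, a constant, and cosine high-pass regressors.
X = np.column_stack([regressor, np.ones(N_SCANS)])
```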

A whole brain comparison of M to controls was not possible (since that would have entailed running an ANOVA with a group N=1). Thus, to quantitatively evaluate differences in brain activity between M and controls, we used a region of interest (ROI) approach, exploring responses in the controls in regions where M exhibited significant (p<0.05, with FDR correction and a minimum cluster size of 10 voxels, see below) differences between speech and environmental sounds, and vice versa.


Results

We explored M’s auditory processing with: 1) audiological tests to rule out unusual hearing problems; 2) additional, extensive behavioral testing; 3) acoustical analyses of the sounds that he was able and unable to recognize; and 4) functional MRI to explore post-insult neural reorganization for verbal and nonverbal sounds.

Audiological Findings

Otoscopic examination revealed clear external auditory meatuses and tympanic membranes bilaterally. Air conduction and masked bone conduction warble tone thresholds revealed normal auditory sensitivity from 250 through 3000 Hz and mild to moderate loss at 4000 to 8000 Hz in each ear; this was within the normal range for the patient’s age. Speech Reception Thresholds (SRT) of 15 dB and 20 dB were obtained for the left and right ears, respectively. Most comfortable loudness levels of 65 dB were identified for each ear. Word Discrimination Scores (WDS) of 95% and 100% were obtained for the left and right ears, respectively. Impedance measurements revealed Type A tympanograms (normal pressure and compliance) for each ear. An ipsilateral acoustic stapedial reflex of 90 dB was obtained at 1000 Hz for each ear. Auditory Brainstem Response (ABR) evaluation utilizing 60 dB, 80 dB, and 90 dB rarefaction clicks at 100 through 3000 Hz revealed all interwave and absolute latencies for Waves I, III, and V to be within normal limits.

M was able to localize sounds presented through speakers in all four 90-degree quadrants consistently at 500, 1000, 1500, 2000, 3000, and 4000 Hz. He was able to identify familiar tunes (Christmas songs) presented to either ear; song identification remained successful with 50 dB equal-intensity white noise masking in the contralateral ear.

In summary, audiology revealed mild-to-moderate higher-frequency hearing loss that was not exceptional given the patient’s age. Impedance tests and ABR revealed normal middle ear and retrocochlear function, ruling out a peripheral explanation for M’s auditory comprehension deficit.

Behavioral Results

Here we describe M’s performance on environmental sound naming both quantitatively and qualitatively. M’s performance on environmental sound comprehension is shown quantitatively in Figure 2 - for further detail, see Saygin, Dick, Wilson, Dronkers, & Bates (2003).

M’s deficit in environmental sound processing was striking. Whereas neurologically normal controls on average achieved 81.9% of the total score in our environmental sound labeling test, M only reached 34.9%, a significantly lower score (p<0.0001; Saygin, Dick, & Bates, 2005). M was able to identify only 63 (27.5%) of the sounds, gave incorrect but close answers on 34 (14.8%), and gave wrong answers on 14 (6.1%). For more than half of the test items (118 trials, 51.6%), he could not provide an answer at all, stating that he did not understand the sound, saying for example, “all I know is there was sound there”. Explicit errors were rare, as reported above, and included reporting “cow” for a sheep, “baby” for a crow, “gunfire” for a helicopter, “laughter” for a man sighing, “motorcycle” for a chainsaw, “bomb going off” for a police car siren, and “train” for slicing bread.

In general, the sounds that M correctly identified were also those sounds that were highly identifiable (correctly identified by ≥ 94.5% of controls), and the items he missed were harder to identify (correctly identified by ≤ 71.5% of controls). However, some of the sounds he could not identify (e.g., doorbell, cuckoo clock, ticking clock, ringing telephone, breaking glass, cheering crowd, sheep, traffic, church bells, bird, camera, and car horn sounds) were labeled correctly by 100% of controls. Indeed, a full 28.8% of the items that M failed to identify were identifiable by 95% or more of control participants.

Overall, while there appeared to be some categories of sounds that M tended to process more successfully than others, there was no unequivocal evidence for a category-specific deficit (e.g., animate/inanimate). M tended to label musical sounds as “music”. He was, however, unable to provide any more information when probed (e.g., when asked what instrument he heard or whether the sound was singing or an instrument, he would say, “music, but that’s it”). In contrast, controls tended to label these sounds more specifically (e.g., violin, clarinet, woman singing; Saygin, Dick, & Bates, 2005). M also categorized some non-musical sounds as music (e.g., basketball bouncing, cuckoo clock, traffic sounds and wind chimes were all labeled “music”). M missed a number of human sounds (screams, sighs). For those he could identify, he could not specify whether it was a male or a female voice that produced the sound (whereas controls often specified the gender without being explicitly prompted, e.g., woman laughing, man yelling; Saygin, Dick, & Bates, 2005). He could identify some animal sounds (e.g., dog and cat), but not all. He was able to identify very few of the machinery sounds, tending to label these “motorcycle”. He had reasonable performance (16/24) in recognizing water sounds (e.g., drips, waves, rain, toilet flush), but, as with his responses to music, he could not identify the sounds more specifically, simply responding with “water”.

M did not appear to be aware of his environmental sound comprehension deficit, nor did he suffer severe functional impairments linked to it. It is likely that contextual cues and information from other modalities allowed him to sufficiently compensate for his auditory agnosia.

Acoustical Analyses

Stepwise multiple logistic regression revealed that 3 acoustical measures significantly predicted M’s performance on environmental sound identification (overall χ²(3) = 42.58, p<0.0001; Uncertainty R² = 0.134). These were: 1) the spectral centroid (χ²(1) = 17.99, p<0.0001), i.e., the center frequency around which there was the most energy in the spectrum; 2) the standard deviation of the harmonics-to-noise ratio (henceforth std-HNR; χ²(1) = 8.34, p<0.01), a measure of how much the harmonics vary over the course of the sound; and 3) total sound duration (χ²(1) = 11.06, p<0.001). Note that some of our acoustical measures were intercorrelated, as reported further in the Supplementary Data.
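A minimal sketch of the kind of model fit here, using statsmodels. The original analysis used stepwise selection over all 18 measures; here the three retained predictors are entered directly, and all variable names are ours:

```python
import numpy as np
import statsmodels.api as sm

def fit_logit(predictors, correct):
    """Logistic regression of per-sound identification (1/0) on acoustic predictors."""
    X = sm.add_constant(np.asarray(predictors, dtype=float))
    return sm.Logit(np.asarray(correct), X).fit(disp=0)

# predictors: one row per sound, columns = [centroid, std_hnr, duration]
# result = fit_logit(predictors, correct)
# result.summary() gives per-predictor Wald statistics (z; z**2 ~ chi-square(1)).
```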

These data indicated that M showed some sensitivity to the coarse spectral qualities of a sound, finding sounds with high spectral centers of gravity more difficult. He was also sensitive to how harmonic the sounds were, performing better on sounds with greater temporal variation in their harmonics. Short sounds were particularly challenging for M.

These effects were robust even when M’s mild hearing loss above 4 kHz was taken into consideration. To check this, we compared M’s behavioral performance with measures of the spectral centroid and std-HNR computed on 4 kHz low-pass filtered versions of the environmental sounds. The spectral centroid (χ²(1) = 10.22, p<0.01) and std-HNR (χ²(1) = 17.88, p<0.001) were again significantly related to M’s performance. Sound duration (obviously unaffected by low-pass filtering) remained a significant predictor.
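A sketch of the 4 kHz low-pass filtering used for this control analysis. The filter type and order were not specified in the original; a zero-phase Butterworth design is assumed here:

```python
from scipy.signal import butter, sosfiltfilt

def lowpass_4khz(signal, sample_rate, order=8):
    """Zero-phase low-pass filter with a 4 kHz cutoff (requires SciPy >= 1.2)."""
    sos = butter(order, 4000, btype="low", fs=sample_rate, output="sos")
    return sosfiltfilt(sos, signal)
```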

To explore properties of sounds that may have been especially difficult for M, we next focused on the items that all healthy controls were able to identify (N=115). For these sounds, the stepwise logistic regression retained 6 predictor variables (χ²(6) = 43.36, p<0.0001; Uncertainty R² = 0.278). As with the analyses run on all items, the spectral centroid, std-HNR, and sound duration were again significant predictors of M’s performance. In addition, the spectral kurtosis, the RMS energy in the 500-1000 Hz octave band (where higher RMS energy was associated with more correct responses), and the RMS energy in the 2000-4000 Hz band (where lower RMS energy was associated with more correct responses) were also significant predictors. Like healthy adults (Gygi, Kidd, & Watson, 2007), M thus appeared to find information in the range around 1000-2000 Hz most useful. When these analyses were run on the 4 kHz low-pass filtered versions of the sounds, the spectral centroid (χ²(1) = 10.99, p<0.001) and std-HNR (χ²(1) = 6.69, p<0.01) remained significant predictors of M’s performance.

One possibility is that, given his intensive practice on speech and language tasks after his infarct, M found it easier to identify those environmental sounds that are acoustically more similar to speech. To investigate this, we calculated the same acoustical measures on a set of 59 words and short phrases (spoken by a male and a female speaker) that were used in a previous study (Cummings et al., 2006). We then determined which environmental sounds had spectral centroid and std-HNR values similar to the speech sounds (i.e., within one standard deviation of the speech mean). Chi-square tests comparing M’s performance on the sounds that did or did not closely overlap with speech showed significant differences for the spectral centroid (χ²(1) = 5.36, p<0.05) and std-HNR (χ²(1) = 6.95, p<0.01): M found environmental sounds easier to identify when they were more speech-like in their spectral weighting and change in harmonicity.
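A sketch of this comparison: flag the sounds whose measure falls within one standard deviation of the speech mean, then test M’s accuracy on speech-like versus non-speech-like sounds with a chi-square test. The counts in the example table are hypothetical:

```python
import numpy as np
from scipy.stats import chi2_contingency

def speechlike_mask(values, speech_values):
    """True for sounds within 1 SD of the mean of the speech stimuli."""
    mu = np.mean(speech_values)
    sd = np.std(speech_values, ddof=1)
    return np.abs(np.asarray(values) - mu) <= sd

# Hypothetical 2x2 counts: rows = speech-like / not, cols = M correct / not.
table = np.array([[40,  30],
                  [23, 136]])
chi2, p, dof, _ = chi2_contingency(table, correction=False)
print(chi2, p)
```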


fMRI Results

For M, speech sounds activated a continuous region in his intact right hemisphere, including Heschl’s gyrus and the surrounding superior temporal gyrus and sulcus. There was also a cluster of perilesional activity for speech sounds in the left anterior superior temporal gyrus. These responses were diminished for environmental sounds, which evoked a significant response above silence only in early auditory cortical regions of the right hemisphere.

The comparison of the two sound conditions revealed two clusters of activation in which M’s activity for speech sounds was significantly greater than that for environmental sounds (p<0.05, with FDR correction and a minimum cluster size of 10 voxels). One of these clusters lay on the left anterior superior temporal gyrus, with the other, larger cluster in the right superior temporal sulcus (shown in Figure 3A and 3B, along with plots of the effect sizes at the peak voxels). M had no brain regions where environmental sounds evoked stronger activity.

Figure 3
FMRI Results. Sagittal sections through M’s brain showing the two clusters in which activation for speech sounds was significantly greater than activation for environmental sounds – our regions of interest (ROIs). There was a cluster in ...

These data contrasted markedly with those of healthy controls, who show only subtle differences in brain activity for the two types of sounds (Dick et al., 2007). In the present data, the statistical comparison of the sound types did not reveal any significant differences for the control subjects. This might be due to insufficient power (since we had a relatively small amount of fMRI data) as well as to the passive task. However, even when the analysis was restricted to the very same locations where M showed significantly greater activity for speech sounds, no differential response was present in the normal controls (Fig. 3C and 3E). The different response profile in Patient M was unlikely to be due simply to ageing: the two older controls showed slightly lower effect sizes than the group of young controls (not significant), but likewise did not show a greater response to speech sounds in these regions (Fig. 3D and 3F).

Further quantifying these ROI data, Figure 4 shows the difference between the effect sizes (Speech - Environmental sounds) for all participants. In both ROIs (where M showed a significant preference for speech), controls showed no preference for speech sounds. Moreover, M’s responses lay well outside the 95% confidence interval of the controls’ data in both ROIs. In contrast, the age-matched older participants’ effect sizes fell within the 95% confidence intervals of the younger controls’ data.

Figure 4
The difference between the effect sizes for the two conditions is on the x axis (Speech - Environmental sounds) with the data from the left (A) and right (B) hemisphere regions of interest (ROI) depicted for all participants. The open square denotes the ...
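A sketch of the ROI comparison in Figure 4: check whether M’s (Speech - Environmental sounds) effect-size difference lies outside a 95% confidence interval computed from the controls. We assume a t-based interval on the control mean (a prediction interval for a new control would be wider); the control values below are hypothetical:

```python
import numpy as np
from scipy import stats

def outside_ci95(patient_diff, control_diffs):
    """True if the patient's effect-size difference falls outside the
    controls' 95% CI for the mean difference."""
    d = np.asarray(control_diffs, dtype=float)
    half = stats.t.ppf(0.975, df=len(d) - 1) * d.std(ddof=1) / np.sqrt(len(d))
    return abs(patient_diff - d.mean()) > half

print(outside_ci95(2.5, [0.1, -0.2, 0.0, 0.3, -0.1, 0.2, 0.1, -0.3]))  # True
```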

To summarize, M’s brain responses to environmental sounds were greatly reduced compared with speech sounds. Both right (contralesional) temporal auditory regions and left (perilesional) anterior temporal cortex showed strong responses to speech sounds but not to environmental sounds. These portions of temporal cortex did not show differential activation for these stimuli in the healthy brain, either in the current study or in previous studies (Dick et al., 2007). This pattern of activity suggests that M’s brain has undergone considerable functional reorganization both contra- and ipsilateral to the lesion. These changes are likely to have supported the selective recovery of M’s speech comprehension.


Discussion

Patient M suffered a left hemisphere infarct causing damage to posterior temporal and parietal cortical regions that are known to be important for language comprehension. Somewhat paradoxically, he presented with a severe and chronic impairment restricted to the comprehension of nonverbal, as opposed to verbal, material. The deficit appeared to affect nonverbal sounds exclusively, the patient exhibiting no impairments on tests of speech comprehension.

With the data at hand, combined with qualitative observations, we suggest that M has an associative auditory agnosia that paradoxically spared speech comprehension but severely affected environmental sound processing. Although this was not extensively tested, we observed no signs of amusia, and at least gross sound localization was intact. It would have been ideal to administer formal assessments of amusia; unfortunately, we are no longer able to test the patient on additional experiments. Similarly, we were not able to administer tests of apperceptive agnosia, so we cannot definitively conclude that M’s is a pure case of associative agnosia.

While a role for the left hemisphere in processing non-linguistic material, more specifically environmental sounds, is increasingly reported (Dick et al., 2007; Lewis et al., 2004; Saygin, Dick, Wilson, Dronkers, & Bates, 2003), the specific sparing of linguistic comprehension in this patient is unusual and unexpected. To our knowledge, there has been one report of a left hemisphere patient with impaired performance in environmental sound processing and spared verbal comprehension. In a study focused on dissociations between sound comprehension and sound localization (Clarke, Bellmann, Meuli, Assal, & Steck, 2000), patient SD exhibited compromised environmental sound comprehension (correct on 40/50 items, corresponding to z=−2.76 compared with controls) but normal verbal comprehension on the Token test. However, SD’s performance is not necessarily unusual: 9 out of 29 patients scored z<−2.76 on environmental sound comprehension in our group of left hemisphere patients, and 5 of these showed little or no comprehension problems on aphasia batteries (Saygin, Dick, Wilson, Dronkers, & Bates, 2003). In the same study, however, we also showed that when assessed using the same task and carefully matched speech and sound stimuli, impairments in verbal and nonverbal comprehension did not dissociate but were tightly correlated (see also Schnider, Benson, Alexander, & Schnider-Klaus, 1994; Spinnler & Vignolo, 1966; Varney, 1980; Vignolo, 1982, 2003). The single exception to this pattern was Patient M (Fig. 2). In this context, M is unique in the literature because his auditory deficit was demonstrably atypical, severe, chronic, and highly specific to nonverbal environmental sound identification, even when controlling for task-specific effects and when compared to other left hemisphere patients.

An audiological assessment ruled out problems in peripheral auditory processing as an explanation of M’s deficits. Potential problems in speech output, naming, or labeling also cannot account for M’s behavioral profile. First, as described above, he not only showed problems naming environmental sounds, but also exhibited compromised performance in processing environmental sounds (but not speech) in a receptive task that does not involve naming or labeling (Dick, Bussiere, & Saygin, 2002; Saygin, Dick, Wilson, Dronkers, & Bates, 2003). Second, M fluently and firmly stated that he did not recognize over half of the sounds (usually “no”, “I don’t know”, or “that was too quick”). Furthermore, M’s speech output has been assessed in other studies by our group (Arevalo, Moineau, Saygin, Ludy, & Bates, 2005; Borovsky, Saygin, Bates, & Dronkers, 2007), and he is known to be among the least impaired of our patients. Qualitatively, M exhibited no hearing or verbal comprehension problems on the days of testing. He conversed intelligently about a number of topics and only rarely had a word-finding problem. His communication was smooth and fluent both in person and on the telephone. He did not exhibit deficits in recognizing voices; M once even noticed the experimenter’s (APS) slight foreign accent and asked about its origins.

Finally, M’s performance is also unlikely to be due to poor attention or motivation, since he did not show a general tendency to perform poorly in experiments administered in our laboratory. M was always motivated, alert, and pleased to participate in our studies. He did not perform notably differently from other patients (and in many cases, from healthy controls) on various other tasks administered by our group at around the same time as the present tests, such as picture naming, word repetition, word reading, auditory word comprehension, reading comprehension, and pantomime interpretation (e.g., see Arevalo, Moineau, Saygin, Ludy, & Bates, 2005; Moineau, Dronkers, & Bates, 2005; Saygin, Wilson, Dronkers, & Bates, 2004). The one exception to this rule is that M showed noticeably poor performance in biological motion perception (Saygin, 2007); unfortunately, we are unable to follow up with further tests to determine whether this deficit bears any relation to the environmental sound agnosia that is the focus of the current study.

M performed best in recognizing environmental sounds that lacked a high center of spectral gravity and that showed greater temporal variation in their harmonics, acoustical characteristics associated with speech. It is possible that M relied on his speech processing resources to help process some environmental sounds. While we fully recognize that correlation does not imply causation, these data raise the intriguing possibility that M makes use of the temporal variation in the harmonics of sounds in the speech range when processing some environmental sounds.

While the interpretation of fMRI data in stroke recovery remains challenging, the present data are consistent with the existing literature, in which both contralateral and perilesional cortex have been implicated in aspects of recovery from aphasia (for reviews, see Crinion & Leff, 2007; Crosson et al., 2007). It is still striking to observe the extent of M’s neural reorganization, and how focused it is on language, possibly to the exclusion of environmental sound processing (Figs. 3 and 4). For M, in addition to perilesional temporal cortex, areas contralateral to the lesion that are not known to be preferentially engaged by speech sounds (Dick et al., 2007; Price, Thierry, & Griffiths, 2005) were activated very strongly by speech but not by environmental sounds. It appears that M’s recovery involved substantial cortical reorganization of auditory processing, with an upregulation of speech processing, a downregulation of environmental sound processing, or a combination of both.

These acoustical and neuroimaging results suggest a possible etiology for M’s unusual profile: the dissociation he exhibited between language and environmental sound comprehension may not be the result of the injury per se, but rather the outcome of the successful recovery of language processing following a neurological insult that initially affected his processing of both speech and non-speech sounds. For everyday living, language is, of course, more important than environmental sound discrimination, so it is possible that following his stroke, M’s auditory computational resources were recruited for the most important function (i.e., language), possibly inhibiting relearning of environmental sounds.

In sum, we report that persistent and severe deficits restricted to nonverbal auditory comprehension can follow a unilateral lesion to the left hemisphere, more specifically, involving the classical Wernicke’s area. This selective deficit observed for nonverbal material appears to be a result of functional reorganization, possibly in response to extensive speech training.

Supplementary Material



Acknowledgments

We thank E. Bates and N. Dronkers for their invaluable feedback and support; S. Moineau and M. Sereno for their help in administering the experiments; and R. Walsh and J. Crinion for helpful comments. This research was supported by National Institutes of Health (NIDCD RO1 DC00216) and National Science Foundation (BCS 0224321) grants. Additionally, APS was supported by European Commission Marie Curie Award FP6-025044, and RL and FD by Medical Research Council Award G0400341.




References

• Albert ML, Sparks R, Stockert TV, Sax D. A case study of auditory agnosia: linguistic and non-linguistic processing. Cortex. 1972;8:427–443.
• Arevalo A, Moineau S, Saygin AP, Ludy C, Bates E. In search of Noun-Verb dissociations in aphasia across three processing tasks. Center for Research in Language Newsletter. 2005;17(1):3–17.
• Ballas JA. Common factors in the identification of an assortment of brief everyday sounds. Journal of Experimental Psychology: Human Perception and Performance. 1993;19:250–267.
• Bates E, Appelbaum M, Salcedo J, Saygin AP, Pizzamiglio L. Quantifying dissociations in neuropsychological research. Journal of Clinical and Experimental Neuropsychology. 2003;25(8):1128–1153.
• Borovsky A, Saygin AP, Bates E, Dronkers N. Lesion correlates of conversational speech production deficits. Neuropsychologia. 2007.
• Clarke S, Bellmann A, Meuli RA, Assal G, Steck AJ. Auditory agnosia and auditory spatial deficits following left hemispheric lesions: evidence for distinct processing pathways. Neuropsychologia. 2000;38:797–807.
• Crinion JT, Leff AP. Recovery and treatment of aphasia after stroke: functional imaging studies. Current Opinion in Neurology. 2007;20:667–673.
• Crosson B, McGregor K, Gopinath KS, Conway TW, Benjamin M, Chang YL, et al. Functional MRI of language in aphasia: a review of the literature and the methodological challenges. Neuropsychology Review. 2007;17:157–177.
• Cummings A, Ceponiene R, Koyama A, Saygin AP, Townsend J, Dick F. Auditory semantic networks for words and natural sounds. Brain Research. 2006;1115:92–107.
• Cummings A, Saygin AP, Bates E, Dick F. Infants’ recognition of meaningful verbal and nonverbal sounds. Language Learning and Development. 2009;5:172–190.
• Dick F, Bussiere J, Saygin AP. The effects of linguistic mediation on the identification of environmental sounds. Center for Research in Language Newsletter. 2002;14(3):3–9.
• Dick F, Saygin AP, Galati G, Pitzalis S, Bentrovato S, D’Amico S, et al. What is involved and what is necessary for complex linguistic and nonlinguistic auditory processing: Evidence from functional magnetic resonance imaging and lesion data. Journal of Cognitive Neuroscience. 2007;19(5):799–816.
• Dick F, Saygin AP, Paulsen C, Trauner D, Bates E. The co-development of environmental sound and language comprehension in school-age children. Paper presented at the Symposium on Attention and Performance XXI: Processes of Change in Brain and Cognitive Development; Iron Horse, Colorado, USA. 2004.
• Fujii T, Fukatsu R, Watabe S, Ohnuma A, Teramura K, Kimura I, et al. Auditory sound agnosia without aphasia following a right temporal lobe lesion. Cortex. 1990;26:263–268.
• Griffiths T. Central auditory processing disorders. Current Opinion in Neurology. 2002;15:31–33.
• Gygi B, Kidd GR, Watson CS. Spectral-temporal factors in the identification of environmental sounds. Journal of the Acoustical Society of America. 2004;115:1252–1265.
• Gygi B, Kidd GR, Watson CS. Similarity and categorization of environmental sounds. Perception and Psychophysics. 2007;69:839–855.
• Kaga K, Shindo M, Tanaka Y, Haebara H. Neuropathology of auditory agnosia following bilateral temporal lobe lesions: a case study. Acta Otolaryngologica. 2000;120:259–262.
• Kazui S, Naritomi H, Sawada T, Inoue N, Okuda J. Subcortical auditory agnosia. Brain and Language. 1990;38:476–487.
• Kertesz A. Aphasia and associated disorders: taxonomy, localization, and recovery. New York: Grune & Stratton; 1979.
• Lewis JW, Wightman FL, Brefczynski JA, Phinney RE, Binder JR, DeYoe EA. Human brain regions involved in recognizing environmental sounds. Cerebral Cortex. 2004;14:1008–1021.
• Moineau S, Dronkers NF, Bates E. Exploring the processing continuum of single-word comprehension in aphasia. Journal of Speech Language and Hearing Research. 2005;48(4):884–896.
• Price C, Thierry G, Griffiths T. Speech-specific auditory processing: where is it? Trends in Cognitive Sciences. 2005;9:271–276.
• Saygin AP. Superior temporal and premotor brain areas necessary for biological motion perception. Brain. 2007;130:2452–2461.
• Saygin AP, Dick F, Bates E. An online task for contrasting auditory processing in the verbal and nonverbal domains and norms for younger and older adults. Behavior Research Methods. 2005;37:99–110.
• Saygin AP, Dick F, Wilson SM, Dronkers NF, Bates E. Neural resources for processing language and environmental sounds: evidence from aphasia. Brain. 2003;126:928–945.
• Saygin AP, Wilson SM, Dronkers NF, Bates E. Action comprehension in aphasia: linguistic and non-linguistic deficits and their lesion correlates. Neuropsychologia. 2004;42(13):1788–1804.
• Schnider A, Benson F, Alexander DN, Schnider-Klaus A. Nonverbal environmental sound recognition after unilateral hemispheric stroke. Brain. 1994;117:281–287.
• Spinnler H, Vignolo LA. Impaired recognition of meaningful sounds in aphasia. Cortex. 1966;2:337–348.
• Spreen O, Benton AL, Fincham RW. Auditory agnosia without aphasia. Archives of Neurology. 1965;13:84–92.
• Varney N. Sound recognition in relation to aural language comprehension in aphasic patients. Journal of Neurology, Neurosurgery and Psychiatry. 1980;43:71–75.
• Vignolo L. Auditory agnosia. Philosophical Transactions of the Royal Society of London B. 1982;298:49–57.
• Vignolo L. Music agnosia and auditory agnosia: Dissociations in stroke patients. Annals of the New York Academy of Sciences. 2003;999:50–57.