Author contributions: C.M.K., M.W.D., and H.J.N. designed research; C.M.K. and M.W.D. performed research; C.M.K. and M.W.D. analyzed data; C.M.K. and H.J.N. wrote the paper.
The developing brain responds to the environment by using statistical correlations in input to guide functional and structural changes; that is, the brain displays neuroplasticity. Experience shapes brain development throughout life, but neuroplasticity varies from one brain system to another. How does the early loss of a sensory modality affect this complex process? We examined cross-modal neuroplasticity in anatomically defined subregions of Heschl's gyrus, the site of human primary auditory cortex, in congenitally deaf humans by measuring the fMRI signal change in response to spatially coregistered visual, somatosensory, and bimodal stimuli. In deaf Heschl's gyrus, signal change for somatosensory and bimodal stimuli was greater than in hearing participants. Visual responses in Heschl's gyrus, although larger in deaf than in hearing participants, were smaller than those elicited by somatosensory stimulation. In contrast to Heschl's gyrus, in the superior-temporal cortex visual signal was comparable to somatosensory signal. In addition, deaf adults perceived bimodal stimuli differently; unlike hearing adults, they were susceptible to a double-flash visual illusion induced by two touches to the face. Somatosensory and bimodal signal change in rostrolateral Heschl's gyrus predicted the strength of the visual illusion in the deaf adults, consistent with the interpretation that the illusion is a functional consequence of the altered cross-modal organization observed in deaf auditory cortex. Our results demonstrate that congenital and profound deafness alters how vision and somatosensation are processed in primary auditory cortex.
A central goal of research on neuroplasticity is to understand the interacting roles of genetic and experiential factors in sculpting brain function and structure. Brain development can be characterized as the gradual unfolding of a powerful, self-organizing network of processes with complex interactions between genes and environment (Johnson, 2001; Johnson and Munakata, 2005). In this context, cross-modal neuroplasticity refers to sensory-specific cortex adapting to respond to an alternative sensory modality after a prolonged absence of the default sensory modality. There are limits to brain plasticity (Bavelier and Neville, 2002; Stevens and Neville, 2006), and little is known about plasticity of the primary auditory cortex in congenitally deaf individuals.
Cross-modal neuroplasticity within auditory cortex is an important area of active research. While some animal studies indicate that both vision and somatosensation play a role in altered cross-modal organization of primary auditory cortex or closely related regions (Allman et al., 2009; Meredith et al., 2009; Lomber et al., 2010; Meredith and Lomber, 2011), other studies report deaf primary auditory cortex does not respond to vision (Kral et al., 2003). Comparisons of auditory cortex across different species suggest the existence of common principles of auditory cortical organization across mammals (Woods et al., 2010; Hackett, 2011), but a one-to-one relationship between auditory cortical areas in humans and different animal models has not been established, complicating comparisons across humans and animal models.
Evidence of cross-modal neuroplasticity in human primary auditory cortex is limited. To our knowledge, no study to date has used a precise anatomical delineation of Heschl's gyrus, the site of human primary auditory cortex, to measure cross-modal neuroplasticity of primary auditory cortex. In past fMRI studies of deaf adults (Finney et al., 2001), standard practice was to use coordinates from the Talairach atlas (Talairach and Tournoux, 1988), a histological atlas based on a single elderly human brain, to localize primary auditory cortex; even so, most studies report visual responses caudal to, rather than overlapping, Heschl's gyrus (Bavelier et al., 2006). The somatosensory modality has not been extensively studied: there are only two human neuroimaging studies reporting somatosensory responses in auditory cortex in congenitally deaf participants. One is a magnetoencephalography (MEG) study of one deaf person in which a source in auditory cortex responded to vibrotactile stimulation (Levänen et al., 1998). The second, a vibrotactile fMRI study of six deaf adults with extensive hearing aid use, was analyzed as a spatial average, and responses to vibrotactile stimulation occurred in deaf Heschl's gyrus (Auer et al., 2007). Both studies provide evidence that deaf auditory cortex may respond to somatosensory stimulation, but both are limited in anatomical precision. A third study, an MEG and fMRI study of a single congenitally deaf individual, found neither visual nor somatosensory responses in deaf auditory cortex (Hickok et al., 1997).
We examined whether visual, somatosensory, and bimodal processing is altered in congenitally deaf adult humans in a group analysis and by quantifying fMRI signal change within superior-temporal cortex and Heschl's gyrus subregions. An additional critical question is whether cross-modal neuroplasticity has functional consequences such as altered perception in deaf individuals. Only the congenitally deaf adults in our study reported a somatosensory double-flash illusion, a visual percept induced by a somatosensory stimulus, and we examined which regions of auditory cortex had signal that correlated with the response rate to the illusion across deaf participants.
All participants were healthy, were not taking psychoactive medications, and did not have a history of neurological or psychiatric conditions. Procedures were approved by the Institutional Review Board of the University of Oregon. Participants gave their informed and written consent and were paid for their participation.
Thirteen congenitally deaf adults (mean age, 27.7; range, 20–40 years) were recruited via deaf community organizations and electronic bulletin boards (10 were female). Participants were profoundly deaf in both ears since birth (bilateral attenuation >90 dB), had minimal past hearing aid use, had a family history of congenital deafness, and reported learning American Sign Language (ASL) to fluency in childhood. Consent was acquired via a written consent form, with a certified ASL interpreter present throughout the experiment. We assessed nonverbal reasoning using a timed (30 min) 12-question short form of Raven's Advanced Progressive Matrices (Bors and Stokes, 1998); scores ranged from 2 to 9, with an average score of 5.2.
Twelve hearing adults (mean age, 30.8; range, 19–48 years) were recruited via community electronic bulletin boards (7 were female). Scores on Raven's Advanced Progressive Matrices for hearing adults ranged from 0 to 12, with an average score of 7.8. Neither age (t(23) = −1.1, p = 0.3) nor Raven's score (t(20) = −2, p = 0.06) was significantly different between groups.
An fMRI-compatible apparatus was constructed to allow precise positioning of somatosensory and visual stimuli for each participant in the MRI scanner (Fig. 1a). The somatosensory component of the device was based on previous somatosensory studies (Huang and Sereno, 2007; Smith et al., 2009). Somatosensory stimuli were “air puffs” above and below the right eye delivered via flexible tubing connected to high-speed solenoids with an adjustable-flow valve (NVKF334; SMC Pneumatics). A compressor provided pressurized air to the solenoids, which can be operated at frequencies of up to 10 Hz by an NI DAQCard and in-house software (PCI-6229 and LabView software; National Instruments). Solenoids and the compressor were in a separate room, with tubing entering the scanning room through a wall portal (waveguide). Inside the MRI bore, the flexible tubing (one-quarter inch inner diameter, one-sixteenth inch wall) was connected to a jointed rig of stiff plastic tubing mounted on the MRI-compatible headset. The jointed rig was adjusted for each participant to deliver air puffs above the right eyebrow and to the cheek or below the right eye. The nozzle was located 0.25 cm from the skin. The air puffs were angled away from the eye and did not cause any blinking or drying of the eye. Visual stimuli, “lights,” were delivered via fiberoptic cables with a 3 mm diameter diffuser mounted directly below the air puff nozzle, connected to red light-emitting diodes in the console room and controlled by the LabView program and the NI DAQCard. Lights were positioned individually for each subject to be at a radial location of 45° above or below the horizontal meridian and at 45° eccentricity to the right of the vertical meridian. Bimodal stimuli were the simultaneous presentations of the visual and somatosensory stimuli delivered from the same location on the apparatus (air puffs to the cheek with visual stimuli below the horizontal meridian, air puffs above the eyebrow with visual stimuli above the meridian). The distance between the tip of the air puff tubing and the location of the light source was 8 mm. In accordance with safety protocols for the MRI center, both hearing and deaf individuals wore both sound-attenuating earplugs and a sound-attenuating headset. Before initiating the scanning session, we verified that participants were able to easily detect all standard stimuli from within the MRI bore and made minor adjustments to the position of the apparatus until participants reported a subjective match between the strength of visual and somatosensory standards. A projector positioned at the back of the MRI bore projected a video display of a central fixation cross on a black background and an instruction cue, a letter, via Presentation software (Neurobehavioral Systems). The brightness of the projector was decreased to 50%. Participants viewed the projection with a custom mirror that did not interfere with the visual-somatosensory apparatus.
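For readers interested in how such stimulus hardware can be driven, the sketch below shows a hypothetical Python analogue of the LabView control described above, using the nidaqmx package to toggle digital output lines on a National Instruments DAQ card. The device name, line assignments, and pulse widths are illustrative assumptions; the study itself used LabView with a PCI-6229 card.

```python
# Minimal sketch of digital-output control for the air-puff solenoid and LED,
# assuming the nidaqmx Python package and an NI DAQ device named "Dev1".
import time
import nidaqmx

PUFF_LINE = "Dev1/port0/line0"   # assumed line driving an air-puff solenoid
LED_LINE = "Dev1/port0/line1"    # assumed line driving the fiberoptic LED

def pulse(line: str, duration_s: float) -> None:
    """Raise a digital output line for duration_s seconds, then lower it."""
    with nidaqmx.Task() as task:
        task.do_channels.add_do_chan(line)
        task.write(True)
        time.sleep(duration_s)
        task.write(False)

# Example usage (requires the DAQ hardware): a 300 ms standard air puff;
# a bimodal standard would drive both lines together.
# pulse(PUFF_LINE, 0.300)
```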
We monitored participants from the console room via an infrared video camera and used an eye-tracker to monitor central fixation, blinking, and participant alertness. For the deaf participants, a video feed of the console room and the ASL interpreter was displayed between scans so that they could communicate with research staff; the hearing participants used an intercom. Participants had access to a "squeeze-ball" alert button to terminate a scan.
Blocks were 20 to 24 s in duration. Each run consisted of 16 randomly ordered "task blocks" interspersed with 5 "resting fixation blocks," in which participants maintained central fixation and were instructed to rest and be ready for the next block of task trials. The total duration of each run was 5 to 5.5 min. A total of 8 randomized runs were available for the experiment. Runs were excluded for drowsiness or excessive motion or were omitted if time did not allow for the full 8 runs. For one deaf and one hearing participant, the first run of the session included saccades; fixation instructions were then repeated, and the first run was excluded. There was no significant difference in the number of runs analyzed for the deaf and hearing groups (hearing, 7.1 runs; deaf, 6.4 runs; t(23) = 1.37, p = 0.18), and each participant completed at least 4 runs. A sample run is depicted in Table 1. Visual stimuli were lights in the peripheral visual field (45°), and somatosensory stimuli were air puffs to the face; both were presented via a custom fiberoptic/pneumatic apparatus (Fig. 1a). Stimuli were presented in blocks of visual stimuli alone (Unimodal Visual), somatosensory stimuli alone (Unimodal Somatosensory), or simultaneous visual-somatosensory "bimodal" stimuli with attention to either visual or somatosensory stimuli in separate blocks (Bimodal Visual Attention or Bimodal Somatosensory Attention) (Fig. 1b). Each block consisted of stimulation to only one stimulus field, either the upper or the lower. Participants detected target stimuli, which made up 20% of the stimuli in each block. Targets were "double lights" (two brief flashes of light) or "double puffs" (two brief air puffs). Standard light flashes were 300 ms in duration, whereas infrequent targets had a 40 ms gap centered at 150 ms. Standard air puffs were also 300 ms in duration, whereas infrequent targets had a 120 ms gap centered at 150 ms. The interval between stimuli was 700–900 ms. A cue letter at fixation, present during the entire block, instructed participants to attend to and detect targets in either the visual or the somatosensory modality or to rest with eyes at the fixation cross; no additional stimuli were presented during resting fixation blocks. The result was a crossed design in which stimuli were either unimodal or bimodal, with a target-detection attention task for either the visual or the somatosensory modality.
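To make the trial-level timing concrete, the following sketch generates one task block's stimulus train from the parameters described above (300 ms standards, gapped targets on 20% of trials, 700–900 ms interstimulus intervals). The trial count per block and the random seed are illustrative assumptions rather than the exact values used in the study.

```python
# Sketch of a single block's stimulus sequence under assumed trial counts.
import random

def make_block(modality: str, n_stimuli: int = 20, seed: int = 0):
    rng = random.Random(seed)
    gap_ms = 40 if modality == "visual" else 120   # target gap centered at 150 ms
    trials = []
    for _ in range(n_stimuli):
        is_target = rng.random() < 0.20            # 20% of stimuli are targets
        trials.append({
            "modality": modality,
            "target": is_target,
            "duration_ms": 300,                    # standard stimulus duration
            "gap_ms": gap_ms if is_target else 0,
            "isi_ms": rng.randint(700, 900),       # interval to the next stimulus
        })
    return trials

block = make_block("somatosensory")
```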
Our main behavioral measures were the overall response rate (hit rate) to true targets in Unimodal Visual and Unimodal Somatosensory blocks, the percentage of responses to illusory bimodal targets in Bimodal Visual Attention blocks, the percentage of responses to control bimodal targets in Bimodal Somatosensory Attention blocks, and the false alarm rate, the percentage of responses to nontargets excluding the illusory and control stimuli.
The auditory-induced double-flash illusion, a phenomenon wherein a single flash of light paired with two or more brief auditory events is perceived as multiple flashes of light (Shams et al., 2000; Mishra et al., 2007), has also been reported for the somatosensory modality (Violentyev et al., 2005) and correlates with activity in visual and somatosensory areas (Lange et al., 2011). In our paradigm (Fig. 1b), double lights and double puffs were the target stimuli to which participants responded. In a Bimodal Visual Attention block, a single light paired with a double puff is not a target; however, if a participant is susceptible to the touch-induced double-flash illusion, this single flash will be perceived as the target "double flash." Thus the response rate to these nontarget stimuli indicates the strength of the illusory percept. To ensure that an increased response rate to these stimuli did not simply reflect an increased response rate in the presence of competing stimuli, we included a "nonillusory control" condition (double lights paired with single puffs) in the Bimodal Somatosensory Attention blocks, since double lights do not induce an illusory percept of two sounds or touches. This asymmetry in double-flash illusions is often interpreted as the sensory modality with greater temporal precision (e.g., audition) influencing the perceived timing of a less precise modality (vision) (for review, see Shams et al., 2004). We reasoned that if the deaf auditory cortex processes somatosensory and bimodal visual-somatosensory input to a greater degree than that of hearing people, the deaf participants would perceive a stronger touch-induced double-flash illusion but would not differ in the nonillusory control condition.
For the main behavioral analysis of the illusion, we performed a repeated measures ANOVA on the percentage of responses to the illusion and control conditions [2 Conditions (Visual Bimodal [Illusion] and Somatosensory Bimodal [Control]) × 2 Groups (Deaf and Hearing)] with an α level of 0.05 and group as a between-subjects factor. In a separate analysis, we tested whether hit rates for nonillusory true targets differed by condition or group. We performed a repeated measures ANOVA on the percentage of responses to true targets [2 Attention Modes (Visual and Somatosensory) × 2 Stimulus Types (Unimodal and Bimodal) × 2 Groups (Deaf and Hearing)] with group as a between-subjects factor. We used the same ANOVA structure to test for group or condition differences in false alarm rates. Pearson's correlations were calculated between the signal change in each region of interest (ROI) (all voxels in the region, not pre-thresholded) and the difference between the response rates to illusory and control stimuli, separately by group.
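As a minimal sketch of the mixed-design ANOVA on the illusion data described above, the code below uses the pingouin package as a stand-in for whatever statistics software the authors actually used; the long-format data frame layout and column names are assumptions.

```python
# Sketch of the 2 (Condition: illusion vs. control) x 2 (Group) mixed ANOVA,
# with Condition as a within-subject factor and Group as a between-subjects factor.
import pandas as pd
import pingouin as pg

def illusion_anova(df: pd.DataFrame) -> pd.DataFrame:
    """df is long format: one row per participant per condition, with columns
    'subject', 'group' ('deaf'/'hearing'), 'condition' ('illusion'/'control'),
    and 'response_rate' (percentage of responses to that stimulus class).
    The Condition x Group interaction term tests the illusion effect."""
    return pg.mixed_anova(data=df, dv="response_rate", within="condition",
                          subject="subject", between="group")
```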
We used a 3 T Siemens Allegra head-only MRI system to collect whole-brain gradient echo EPI images. The TR was 2 s, TE was 30 ms, and the flip angle was 80°. We collected 32 axial slices with a thickness of 3.125 mm in an interleaved acquisition order. We used the Siemens Prospective Acquisition Correction (PACE) protocol to compensate for head motion in real time before the acquisition of each whole-brain image. We excluded any run in which there was significant motion not accounted for by PACE. fMRI data processing was carried out using FEAT version 5.98, part of FSL (Smith et al., 2004) (FMRI Expert Analysis Tool; FMRIB Software Library, www.fmrib.ox.ac.uk/fsl). The following pre-statistics processing was applied: slice-timing correction; spatial smoothing using a Gaussian kernel of FWHM 4 mm; grand-mean intensity normalization; and high-pass temporal filtering (cutoff = 0.0125 Hz). Task-related regressors were modeled as boxcars convolved with the FSL default canonical hemodynamic response function; separate boxcars represented visual and somatosensory blocks, with interaction terms for bimodal blocks (Table 1). Time-series statistical analysis using the GLM was carried out with FILM with local autocorrelation correction (Woolrich et al., 2001). Functional images were coregistered to each individual participant's T1-weighted structural image, which was then coregistered and resampled to the FSL standard 2 × 2 × 2 mm brain (MNI/ICBM 152 template; Mazziotta et al., 2001) using FLIRT with 12 degrees of freedom (Jenkinson and Smith, 2001; Jenkinson et al., 2002). Higher-level analysis was carried out using FLAME (FMRIB Local Analysis of Mixed Effects) stage 1 and stage 2 (Beckmann et al., 2003; Woolrich et al., 2004). For group analyses, individual structural and functional volumes were coregistered to a common stereotaxic template. Z (Gaussianized T/F) statistical images were thresholded using clusters determined by Z > 2.8 and a corrected cluster significance of p < 0.01 for comparisons across deaf and hearing groups (Worsley, 2001). The coordinates of local peaks within significant clusters are reported relative to the Jülich and Harvard-Oxford atlases in FSL. The superior-temporal ROI was based on the Harvard-Oxford atlas (25% threshold) and included the anterior and posterior superior-temporal gyrus and the planum temporale.
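To illustrate the regressor construction described above, the sketch below builds block boxcars, convolves them with a gamma-shaped hemodynamic response function, and forms an interaction regressor for bimodal blocks. The gamma parameters (6 s mean lag, 3 s SD, a common FEAT default) and the toy block onsets are assumptions; FEAT performs this step internally.

```python
# Sketch of boxcar regressors convolved with a canonical HRF and sampled at the TR.
import numpy as np
from scipy.stats import gamma

TR = 2.0
dt = 0.1
t_hi = np.arange(0, 330, dt)            # high-resolution time axis for one toy run
frame_times = np.arange(0, 330, TR)     # acquisition times

def canonical_hrf(mean_lag=6.0, sd=3.0, length=32.0):
    t = np.arange(0, length, dt)
    shape, scale = (mean_lag / sd) ** 2, sd ** 2 / mean_lag
    h = gamma.pdf(t, a=shape, scale=scale)
    return h / h.sum()

def boxcar(onsets_s, duration_s=22.0):
    b = np.zeros_like(t_hi)
    for onset in onsets_s:
        b[(t_hi >= onset) & (t_hi < onset + duration_s)] = 1.0
    return b

def to_regressor(box):
    conv = np.convolve(box, canonical_hrf())[: len(t_hi)]
    return np.interp(frame_times, t_hi, conv)

# Hypothetical onsets: blocks containing visual and somatosensory stimulation;
# the block at 108 s contains both, i.e., a bimodal block.
vis_box = boxcar([20, 108, 240])
som_box = boxcar([64, 108, 284])
design = np.column_stack([to_regressor(vis_box),
                          to_regressor(som_box),
                          to_regressor(vis_box * som_box)])   # interaction term
```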
There is considerable variability in the morphology and position of individual Heschl's gyri. For example, when individual brains are coregistered to a template brain, Heschl's gyrus in one person may partially overlap the inferior frontal gyrus of another person. Nonlinear coregistration methods may also fail to adequately map a single-gyrus morphology of Heschl's gyrus in one participant onto the partial or complete duplication of Heschl's gyrus in another. For this reason, we defined Heschl's gyrus anatomically in each individual. To further illustrate individual variability, Figure 2a shows the probabilistic location of primary auditory cortex in the Jülich histological atlas, an atlas based on the microscopic and quantitative histological examination of 10 human postmortem brains, 3D reconstructed and linearly transformed into MNI152 space (Eickhoff et al., 2005, 2006, 2007). The location of primary auditory cortex is shown thresholded at 30% probability (3 of 10 brains). The overlay is a digitized version of the Talairach atlas location of area 41 (primary auditory cortex) registered to MNI152 space (Talairach and Tournoux, 1988; Lancaster et al., 2000, 2007; Lacadie et al., 2008). Note that the Talairach atlas is based on a single elderly individual, and its location of the right posterior primary auditory cortex extends posterior to the probabilistic primary auditory cortex as defined by the Jülich histological atlas.
Blind to the group status of each participant's brain, we parcellated each Heschl's gyrus by hand on a structural volume (T1-weighted MPRAGE) coregistered and resampled to the 2 × 2 × 2 mm FSL standard brain (MNI152); ROIs were drawn using the Space software (http://lcni.uoregon.edu/~dow/Space_program.html). On sagittal planes, an initial boundary of Heschl's gyrus was drawn. These boundaries were projected onto coronal planes and adjusted if the gyrus was visible in cross section at either the sagittal or the coronal orientation. The boundaries were also checked in projection on the axial planes, where voxels with low neighborhood support were excluded and voxels with high neighborhood support were included. All voxels within the boundary were included. Figure 2b shows that our anatomically defined Heschl's gyri fall within reasonable boundaries for primary auditory cortex, as defined by the Jülich histological atlas.
The individual Heschl's gyri were partitioned into anterior and posterior subdivisions by a plane oriented along the first principal component of the voxel centers, to allow comparison with recent tonotopic functional neuroimaging demonstrating that human primary cortical areas A1 and R respect the anatomical boundaries of anterior and posterior Heschl's gyrus, respectively (Da Costa et al., 2011). In a second parcellation, we divided each individual Heschl's gyrus ROI into three subregions along its length (central, caudomedial, and rostrolateral) to approximate the cytoarchitectonic divisions Te1.0, Te1.1, and Te1.2, respectively, which correspond to human primary auditory cortex. The central region, Te1.0, is the most granular, with the thickest layer IV and small layer IIIc pyramidal cells; the medial area, Te1.1, has less distinct layers with medium-sized layer IIIc pyramidal cells; and the lateral area, Te1.2, has a thick layer III with medium-sized layer IIIc pyramidal cells (Morosan et al., 2001). The subregion boundaries were planes through the points one-third and two-thirds of the distance between the extreme caudomedial and rostrolateral positions along the first principal component axis. The parameter estimates from all unthresholded voxels within the boundary were extracted for each term in the model and scaled to percentage signal change.
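A minimal sketch of this geometric parcellation is shown below: the voxel centers of a hand-drawn Heschl's gyrus ROI are projected onto their first principal component, and cut planes along that axis define the subregions. The input array of voxel coordinates is assumed to come from the individual ROI mask; variable names are illustrative.

```python
# Sketch: divide an ROI into segments along its first principal component axis.
import numpy as np

def parcellate(voxel_xyz: np.ndarray, n_parts: int = 3) -> np.ndarray:
    """Assign each voxel to one of n_parts bins along the ROI's long axis."""
    centered = voxel_xyz - voxel_xyz.mean(axis=0)
    # First principal component = direction of greatest spread of voxel centers,
    # i.e., the gyrus's caudomedial-to-rostrolateral axis.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    proj = centered @ vt[0]
    # Cut the axis into equal-length segments (thirds approximate the
    # caudomedial, central, and rostrolateral subregions described above).
    edges = np.linspace(proj.min(), proj.max(), n_parts + 1)
    return np.clip(np.digitize(proj, edges[1:-1]), 0, n_parts - 1)

# voxel_xyz: (N, 3) array of voxel center coordinates from an individual ROI mask
# labels = parcellate(voxel_xyz, n_parts=3)
```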
There were four experimental block types: unimodal visual, unimodal somatosensory, bimodal visual-somatosensory with visual attention, and bimodal visual-somatosensory with somatosensory attention. The experiment was thus a 2 × 2 design. The manipulations were as follows: stimulus type, either a unimodal or a bimodal stimulus; and attention modality, attention to the visual or the somatosensory stimulus. The dependent measure was the percentage signal change in the BOLD signal relative to the fixation baseline, extracted as the mean parameter estimate using Featquery in FSL; our GLM modeled the bimodal blocks as contributions from the visual and somatosensory modalities plus an interaction term (Table 1), which were summed to obtain the percentage signal change in bimodal blocks. Normality of the data for each variable was tested with a Kolmogorov-Smirnov test, and no variable violated the assumption of normality (p > 0.5). For the ROI analyses, we performed an ANOVA [2 Stimulus Types (Unimodal, Bimodal) × 2 Attention Modes (Visual, Somatosensory) × 2 Hemispheres (Contralateral, Ipsilateral)] with Group (Deaf, Hearing) as a between-subjects factor; Greenhouse-Geisser corrections were applied. The Heschl's gyrus analyses also included subregion as a factor (Rostrolateral, Central, Caudomedial in one parcellation; Anterior, Posterior in the other). All variables were repeated measures except group, which was a between-subjects factor. The p values for follow-up t test contrasts were multiplied by the number of comparisons to control type I error.
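The summation of GLM terms for the bimodal blocks described above can be made explicit with a small sketch; the function names and the ROI masking step are assumptions, since the study extracted these values with Featquery.

```python
# Sketch: assemble bimodal percentage signal change from the three GLM terms
# (visual, somatosensory, interaction), averaged over unthresholded ROI voxels.
import numpy as np

def roi_mean(pe_map: np.ndarray, roi_mask: np.ndarray) -> float:
    """Mean parameter estimate (already scaled to % signal change) within an ROI."""
    return float(pe_map[roi_mask].mean())

def bimodal_signal_change(pe_visual, pe_somato, pe_interaction, roi_mask):
    # Bimodal blocks are modeled as visual + somatosensory + interaction,
    # so their % signal change is the sum of the three ROI-averaged terms.
    return (roi_mean(pe_visual, roi_mask)
            + roi_mean(pe_somato, roi_mask)
            + roi_mean(pe_interaction, roi_mask))
```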
While group analyses spatially averaged across participants are less anatomically specific than individual ROI analyses, we performed a whole-brain group analysis in standard MNI152 stereotaxic space to allow our results to be compared to the existing literature. The statistical threshold was Z > 2.8, cluster corrected at two-tailed p < 0.01. There were no regions in which signal for the hearing adults was significantly greater than for the deaf adults and no differences between upper- and lower-field stimulation. Figure 3 illustrates the results of group-level contrasts between deaf and hearing adults for visual and somatosensory unimodal and bimodal responses for all slices with a significant difference between groups. We found that deaf participants had larger visual and somatosensory responses than hearing participants in superior-temporal lobe regions. Signal in the anterior/rostral aspect of atlas-defined Heschl's gyrus was significantly larger in the deaf than the hearing for unimodal somatosensory stimulation, while unimodal visual stimulation elicited greater signal in deaf than hearing only in the posterior/caudal portion of atlas-defined Heschl's gyrus. The coordinates of local peaks within significant clusters, with probabilistic atlas descriptions of their locations, are summarized in Tables 2 and 3. To illustrate that the group effect does not depend on a few individuals, Figure 4 shows that among individual participants, hearing adults had little positive signal change in Heschl's gyrus, even at a low statistical threshold (Z > 1.65), in contrast to the majority of deaf participants. This qualitative result was quantified by extracting signal change estimates from individual ROIs.
Figure 5 illustrates the signal change in Heschl's gyrus anterior and posterior subregions for deaf and hearing participants. According to a recent tonotopic fMRI study (Da Costa et al., 2011), the anterior aspect of Heschl's likely corresponds mainly to human A1, while the posterior aspect corresponds to human area R. Both A1 and R are core auditory regions in the macaque monkey (Hackett, 2011). Deaf participants had a larger response across both regions (Main Effect of Group: F(1,23) = 29.9, p < 0.001). The difference between deaf and hearing was larger for somatosensory and bimodal stimuli than visual (Group × Stimulus: F(1,23) = 5.7, p = 0.025; Group × Attention: F(1,23) = 7.5, p = 0.012), and Group differences did not interact with hemisphere (p > 0.10). Group differences did interact with the Anterior/Posterior subdivisions (F(1,23) = 4.6, p = 0.042) and because hemisphere tended to interact with subregion (F(1,23) = 3.8, p = 0.06), we performed follow-up t test contrasts between deaf and hearing participants in the anterior (Ant) and posterior (Post) subregions separately in the contralateral (Contra) and ipsilateral (Ipsi) hemispheres for a total of 16 contrasts, corrected for multiple comparisons. Deaf participants had larger responses than hearing participants in each subregion for each condition with the exception of unimodal vision, which was significantly different between groups only in the contralateral posterior subregion [Unimodal Somatosensory (Ant Contra: t(23) = 5.2, p < 0.01; Post Contra: t(23) = 5.4, p < 0.01; Ant Ipsi: t(23) = 3.7, p = 0.017; Post Ipsi: t(23) = 5.1, p < 0.01), Bimodal Somatosensory Attention (Ant Contra: t(23) = 5.0, p < 0.01; Post Contra: t(23) = 4.3, p < 0.01; Ant Ipsi: t(23) = 4.9, p < 0.01; Post Ipsi: t(23) = 5.7, p < 0.01), Bimodal Visual Attention (Ant Contra: t(23) = 4.1, p < 0.01; Post Contra: t(23) = 4.5, p < 0.01; Ant Ipsi: t(23) = 3.4, p = 0.04; Post Ipsi: t(23) = 4.7, p < 0.01), and Unimodal Visual Stimulation (Ant Contra: t(23) = 2.6, p = 0.26; Post Contra: t(23) = 3.6, p = 0.023; Ant Ipsi: t(23) = 1.6, p > 1; Post Ipsi: t(23) = 2.6, p = 0.25)].
Figure 6 illustrates the signal change for deaf and hearing participants in the Heschl's gyrus subregions (caudomedial, central, and rostrolateral) that approximate cytoarchitectonic regions Te1.1, Te1.0, and Te1.2 (Morosan et al., 2001). As shown in Figure 6d, the deaf had a larger response across all Heschl's gyrus regions (Group main effect: F(1,23) = 29.3, p < 0.001). The difference between deaf and hearing adults was larger for somatosensory and bimodal blocks than for visual blocks (Group × Stimulus: F(1,23) = 5.34, p = 0.03; Group × Attention: F(1,23) = 6.98, p = 0.015), but Group did not interact with subregion or hemisphere (p > 0.10). There was a main effect of Heschl's gyrus subregion, with the largest signal, on average, in the rostrolateral region (Region Effect: F(2,46) = 4.39, p = 0.018). Bimodal stimuli elicited a larger response than unimodal stimuli across both deaf and hearing groups (Main Effect of Stimulus Type: F(1,23) = 36.8, p < 0.001). Since group status did not interact with subregion or hemisphere, we averaged across Heschl's gyrus regions to perform 10 follow-up t test contrasts, corrected for multiple comparisons. Deaf adults had larger responses than hearing adults to unimodal somatosensory stimulation (t(23) = 5.3, p < 0.01), bimodal stimulation with visual attention (t(23) = 4.6, p < 0.01), and bimodal stimulation with somatosensory attention (t(23) = 5.5, p < 0.01), with a trend toward larger responses to unimodal visual stimulation (t(23) = 2.9, p = 0.08). Comparing conditions in the deaf participants, we found that somatosensory stimuli elicited larger activations than visual stimuli (t(12) = 4.76, p < 0.01) and that bimodal visual signal was larger than unimodal visual signal (t(12) = 4.86, p < 0.01), but the contrast between bimodal somatosensory and unimodal somatosensory signal was not significant (t(12) = 2.3, p = 0.4). In the hearing, bimodal stimuli with somatosensory attention elicited larger signal than unimodal somatosensory stimuli (t(11) = 3.6, p = 0.04), but there was no difference between unimodal visual and unimodal somatosensory stimuli (t(11) = 0.44, p > 1) or between unimodal visual and bimodal stimuli with visual attention (t(11) = 1.19, p > 1).
For comparison to Heschl's gyrus and to complement the group analysis, we performed a superior-temporal ROI analysis (Fig. 7). The deaf had a larger superior-temporal response overall (Group main effect: F(1,23) = 24.7, p < 0.001), and the increased signal for bimodal versus unimodal stimulation was larger in deaf than in hearing participants (Group × Stimulus: F(1,23) = 6.923, p = 0.015). There were no significant interactions between group and hemisphere or attention modality. Collapsing across hemisphere, we performed 10 follow-up contrasts corrected for multiple comparisons. Deaf participants had significantly greater superior-temporal signal for each condition (Unimodal Visual, t(23) = 5.0, p < 0.01; Unimodal Somatosensory, t(23) = 3.7, p < 0.01; Bimodal Visual Attention, t(23) = 4.8, p < 0.01; Bimodal Somatosensory Attention, t(23) = 4.8, p < 0.01). Comparing conditions within the deaf participants, we found that the contrast between bimodal visual and unimodal visual stimuli was significant (t(12) = 4.68, p < 0.01) and the contrast between unimodal somatosensory and bimodal stimuli tended toward significance (t(12) = 3.1, p = 0.09). In the hearing participants, the contrast between unimodal visual and bimodal stimuli was not significant (t(11) = 2.3, p > 0.10), but the contrast between unimodal somatosensory and bimodal stimuli was significant (t(11) = 4.1, p = 0.02). Importantly, within each group, there was no difference between unimodal visual and unimodal somatosensory signal in the superior-temporal region for either the deaf (t(12) = 2.0, p > 0.65) or the hearing (t(11) = 1.72, p > 1) participants.
A central question is whether the altered organization of primary auditory cortex in the deaf is related to altered perception. We investigated this question with a somatosensory variant of the double-flash illusion (Fig. 8). Behavioral measures are reported in Tables 4 and 5. Hit rates were higher overall for somatosensory attention (57% ± 3% SEM) than for visual attention (37% ± 5% SEM; F(1,23) = 52.3, p < 0.001) and did not differ by or interact with group. False alarm rates to standard stimuli were low but were significantly higher for unimodal (4.0% ± 0.2% SEM) than for bimodal (3.2% ± 0.3% SEM) blocks (F(1,23) = 10.8, p = 0.003). There tended to be fewer false alarms in the deaf group for somatosensory stimuli (Group × Attention modality interaction: F(1,23) = 4.2, p = 0.051). The hit and false alarm rates indicated that target detection was well above the perceptual threshold and that the response criterion was conservative [somatosensory: d-prime 2.1, criterion (C) 0.88; visual: d-prime 1.6, C 1.1]. In the main comparison, illustrated in Figure 8c, deaf and hearing participants responded differently to the illusion and control stimuli (Condition × Group interaction, F(1,21) = 9.08, p = 0.007). Deaf people responded on average to 37% (±7.8% SEM) of the illusory double-flash stimuli, similar to their response rate for true visual targets in bimodal blocks (36% ± 5% SEM). In contrast, the responses of hearing participants indicated they were not susceptible to the illusion; hearing people responded to only 12% (±1.8% SEM) of the illusory double-flash stimuli, while their response rate to true visual targets was 39% (±4% SEM). In the nonillusory control condition, deaf adults responded to 9.8% (±1.6%) and hearing adults to 10.8% (±1.7%) of the stimuli.
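For readers unfamiliar with the signal detection measures quoted above, the sketch below computes d-prime and criterion from hit and false alarm rates (d' = z(hit) − z(FA); C = −0.5 × [z(hit) + z(FA)]). The example rates are illustrative values near the group means reported above; the published d' and C values were presumably computed per participant, so exact numbers will differ.

```python
# Sketch of the d-prime and criterion calculation from hit and false alarm rates.
from scipy.stats import norm

def dprime_and_criterion(hit_rate: float, fa_rate: float):
    z_hit, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)
    d_prime = z_hit - z_fa            # sensitivity
    criterion = -0.5 * (z_hit + z_fa) # positive values indicate a conservative criterion
    return d_prime, criterion

d_som, c_som = dprime_and_criterion(0.57, 0.04)  # illustrative somatosensory-attention rates
d_vis, c_vis = dprime_and_criterion(0.37, 0.04)  # illustrative visual-attention rates
```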
We expressed the illusion as the difference between the response rates to the illusory target and the nonillusory control and then tested whether this metric was predicted by the signal change in the Heschl's gyrus subregions and the superior-temporal regions using Pearson's correlation. Signal change in the rostrolateral subregion of contralateral Heschl's gyrus predicted the strength of the illusion in deaf participants for somatosensory and bimodal blocks, but not for visual blocks (Fig. 8). In other words, deaf participants whose Heschl's gyrus was more responsive during blocks with somatosensory stimulation (either unimodal or bimodal) had a stronger illusion effect. Correlations in other ROIs were not significant (p > 0.05).
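A minimal sketch of this brain-behavior correlation is given below: illusion strength (illusory minus control response rate) is correlated with ROI signal change across participants using scipy's Pearson correlation. The function and argument names are illustrative assumptions.

```python
# Sketch: correlate illusion strength with ROI % signal change across participants.
import numpy as np
from scipy.stats import pearsonr

def illusion_correlation(illusory_rate, control_rate, roi_signal_change):
    """illusory_rate and control_rate are per-participant response rates;
    roi_signal_change is the % signal change in the same ROI, in the same
    participant order. Returns the Pearson r and its p value."""
    strength = np.asarray(illusory_rate) - np.asarray(control_rate)
    return pearsonr(np.asarray(roi_signal_change), strength)
```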
We demonstrated that the fMRI signal in Heschl's gyrus, the site of human primary auditory cortex, responded to unimodal somatosensory stimuli in congenitally deaf adults but not in hearing adults. Bimodal stimuli elicited a larger response than unimodal stimuli in Heschl's gyrus in both deaf and hearing adults but represented a robust increase from the fixation baseline only in deaf adults. In deaf Heschl's gyrus, visual responses were weaker than responses to somatosensory or bimodal stimulation. For unimodal vision, group differences were significant only in the contralateral posterior subregion of Heschl's gyrus, a likely homolog of primate area R (Da Costa et al., 2011). In contrast to Heschl's gyrus, there was no difference between unimodal somatosensation and vision in the deaf superior-temporal region (auditory association and multisensory cortex). A key finding was the marked perceptual difference between deaf and hearing adults; deaf, but not hearing, adults were susceptible to an illusory percept of a double flash of light when a single flash was paired with a double touch to the face. The strength of the illusion was predicted by signal change in the contralateral rostrolateral subregion of Heschl's gyrus (approximating Te1.2) (Morosan et al., 2001) in the deaf adults.
A limitation of previous studies of cross-modal neuroplasticity of auditory cortex in deaf humans is the spatial resolution afforded by the techniques used. Individual Heschl's gyri vary in morphology and position (Morosan et al., 2001; Da Costa et al., 2011), and analyses that spatially average across individual brains can result in activity from the planum temporale or somatosensory regions of the parietal operculum being misattributed to Heschl's gyrus, or in true activity in Heschl's gyrus being missed because of low spatial concordance across participants. To our knowledge, no previous study of congenitally deaf humans has used an ROI approach to identify Heschl's gyrus or has subdivided these regions to test whether cross-modal neuroplasticity differs in subregions approximating human primary auditory cortical areas. In past fMRI studies of deaf adults, such as Finney et al. (2001), standard practice was to use coordinates from the Talairach atlas, which is based on a single elderly brain, to localize primary auditory cortex. As we illustrate (Fig. 2), Talairach coordinates for primary auditory cortex are more posterior on the right than in recent probabilistic atlases (Eickhoff et al., 2007). Even so, most studies of altered visual organization in deaf participants report altered cross-modal organization caudal to, rather than overlapping, the posterior aspect of Heschl's gyrus (Bavelier et al., 2006).
Our results suggest that cross-modal neuroplasticity in deaf primary auditory cortex is greater for the somatosensory than the visual modality. This could be explained by stimulus intensity differences between visual and somatosensory stimuli, but this is unlikely in our experiment. Although somatosensory targets were more readily distinguished from somatosensory standards than visual targets from visual standards, the standard stimuli (80% of the trials) were easily detected for both modalities, and in the superior-temporal region there was no significant difference between visual and somatosensory signal amplitudes. Another possible explanation for the disparity between modalities in primary auditory cortex could be that a different stimulus type, such as peripheral visual motion, is more suited to elicit responses in deaf primary auditory cortex. Future studies with more diverse visual stimulation and parametric manipulation of stimulus strength with anatomical ROIs are needed to definitively address this issue.
Our results demonstrating robust responses in Heschl's gyrus for the somatosensory modality are consistent with the two previous reports of somatosensory responses in the auditory cortex of deaf humans. Auer et al. (2007) reported somatosensory activation overlapping atlas-defined Heschl's gyrus in a deaf group relative to fixation, but an ROI analysis was not performed, making it difficult to determine whether responses were in primary auditory cortex. An MEG study using source modeling in a single elderly deaf person reported that somatosensory responses were accounted for by a source in auditory cortex (Levänen et al., 1998), although the spatial precision of MEG is lower than that of fMRI, making more precise localization problematic. However, in a different MEG and fMRI study of a 28-year-old congenitally deaf man, no visual or somatosensory responses were found in auditory cortex (Hickok et al., 1997). In our sample of 13 congenitally deaf adults, there were individual differences in the cross-modal responses in deaf auditory cortex, and these differences were correlated with altered perception.
An interesting point to consider is whether group differences are influenced by qualitatively different experiences of background sounds in an MRI experiment. For example, the negative response of Heschl's gyrus in hearing participants could be elicited by overt attention to MRI scanner sounds during resting fixation. However, this interpretation is not supported by results in the superior-temporal region, which were positive or near zero for hearing participants. It seems unlikely that this auditory and multisensory region is less responsive with overt attention to the MRI scanner noise than primary auditory cortex. In addition, group differences in the resting fixation condition do not account for differential signal change between conditions. Unfortunately, MRI background sounds are inherent to the MRI technique and cannot be matched between deaf and hearing groups.
We found that the deaf participants perceived a somatosensory-induced double-flash illusion while hearing participants did not. The absence of any illusion in hearing participants is surprising in light of previous reports that have shown that this double-flash illusion may be observed for either auditory or somatosensory stimulation in hearing adults (Violentyev et al., 2005; Lange et al., 2011). This may be due to stimulus differences; our somatosensory stimuli were air puffs to the face and were spatially coregistered with the lights in the far visual periphery while previous studies used vibrotactile stimulation to the fingertips. We positioned the lights in the far periphery because previous studies have shown that visual enhancements in the deaf are strongest in the visual periphery (Neville and Lawson, 1987; Bavelier et al., 2000) and this factor, combined with increased deaf tactile sensitivity (Levänen and Hamdorf, 2001), may be what led to a robust illusory percept only in the deaf participants. Across deaf individuals, there was variability in the susceptibility to the illusion, and the response in rostrolateral Heschl's gyrus predicted the strength of the illusion in the deaf participants. This finding is consistent with our hypothesis that cross-modal neuroplasticity in primary auditory cortex contributes to altered perceptions in deaf people.
Notably, although somatosensory responses were robust in each subregion of Heschl's gyrus, it was the rostrolateral region (Te1.2) that predicted the strength of the somatosensory-induced double-flash illusion and had the largest overall signal change. The functional specialization of different regions of human primary auditory cortex is not currently known, but our results suggest that altered cross-modal organization of primary auditory cortex in deaf people is not uniform. In addition, while visual responses in deaf Heschl's gyrus were weak compared to somatosensory and bimodal responses, they were equal to somatosensory responses in the superior-temporal region. These findings are interesting in light of evidence from animal studies of visual-somatosensory cross-modal plasticity in auditory cortex. In cats, there are dissociations between auditory cortical regions; a core auditory area, the anterior auditory field (AAF), in cats responds to somatosensory stimulation (Meredith and Lomber, 2011) and shows different cross-modal responses than the auditory field of the anterior ectosylvian sulcus (fAES) (Meredith et al., 2011). In addition, multisensory visual-somatosensory neurons are prevalent in the primary auditory cortex of congenitally deaf mice (Hunt et al., 2006). Research addressing which human auditory areas are likely homologs of regions in animal models would allow for more direct comparisons across models.
An important question is how altered organization of the auditory cortex arises. One possibility is the developmental stabilization of cross-modal connections that occur even in typically developing individuals. Primate studies indicate that multisensory interactions between hearing and touch occur in the auditory cortex of hearing individuals (Kayser et al., 2005), with some very early somatosensory responses to median nerve electrical stimulation in primary auditory cortex (Lakatos et al., 2007). In addition, visual stimuli influence auditory cortex (Bizley et al., 2007; Kayser et al., 2007), and multisensory interactions occur in low-level auditory cortex (Musacchia and Schroeder, 2009). In our study, even the hearing participants had increased signal in Heschl's gyrus and superior-temporal cortex for bimodal stimuli relative to unimodal stimuli. If cross-modal connections are typical in the auditory cortex of hearing individuals, it is reasonable to speculate that these connections increase in number and strength when acoustic input is reduced. The receptive fields for these cross-modal inputs into deaf auditory cortex may be large and extend bilaterally (Meredith and Lomber, 2011; Meredith et al., 2011).
Future research using methods sensitive to the timing of multisensory responses in auditory cortex, such as EEG and MEG, may elucidate whether these signals occur early in the sensory processing hierarchy or are due to later feedback from other cortical areas (e.g., subcortical connectivity, corticocortical feedback, or feedforward pathways between primary cortices). Future studies using event-related designs or block designs with alternating rest (Kayser et al., 2005, 2007) could address whether time-series differ between regions and conditions. Another important question for future research is whether altered organization and altered perception have a sensitive period leading to different plasticity for individuals who become deaf later in childhood or adulthood and how it is affected by later reintroduction of auditory nerve input through cochlear implantation; for example, deafness in adulthood induces somatosensory conversion of ferret auditory cortex (Allman et al., 2009). It is important to understand how the age of onset of deafness, sign language learning, and degree of deafness influence cross-modal neuroplasticity of auditory cortex and perceptual changes such as the somatosensory-induced double-flash illusion. Together, our results highlight the central role of experiential factors in driving brain development and function, even at the level of primary sensory cortices, and have practical implications for educational and rehabilitative programs for both typically and nontypically developing individuals.
This research was supported by National Institute on Deafness and Other Communication Disorders Grant R01 DC000128 (H.J.N.). We thank the volunteers who participated and our American Sign Language interpreter Candice Kingrey for her important assistance. Scott Frey, Jolinda Smith, Bill Troyer, Tara Armstrong, and Tony Mecum provided technical contributions to the experimental apparatus. Joseph Wekselblatt assisted with data analysis. Scott Watrous and Tara Armstrong assisted with data collection.