1.  Binaural Fusion and Listening Effort in Children Who Use Bilateral Cochlear Implants: A Psychoacoustic and Pupillometric Study 
PLoS ONE  2015;10(2):e0117611.
Bilateral cochlear implants aim to provide hearing to both ears for children who are deaf and promote binaural/spatial hearing. Benefits are limited by mismatched devices and unilaterally driven development, which could compromise the normal integration of left and right ear input. We thus asked whether children hear a fused image (i.e., 1 vs. 2 sounds) from their bilateral implants and if this “binaural fusion” reduces listening effort. Binaural fusion was assessed by asking 25 deaf children with cochlear implants and 24 peers with normal hearing whether they heard one or two sounds when listening to bilaterally presented acoustic click-trains/electric pulses (250 Hz trains of 36 ms presented at 1 Hz). Reaction times and pupillary changes were recorded simultaneously to measure listening effort. Bilaterally implanted children heard one image of bilateral input less frequently than normal hearing peers, particularly when intensity levels on each side were balanced. Binaural fusion declined as brainstem asymmetries increased and age at implantation decreased. Children implanted later had access to acoustic input prior to implantation due to progressive deterioration of hearing. Increases in both pupil diameter and reaction time occurred as perception of binaural fusion decreased. Results indicate that, without binaural level cues, children have difficulty fusing input from their bilateral implants to perceive one sound, which costs them increased listening effort. Brainstem asymmetries exacerbate this issue. By contrast, later implantation, reflecting longer access to bilateral acoustic hearing, may have supported development of auditory pathways underlying binaural fusion. Improved integration of bilateral cochlear implant signals for children is required to improve their binaural hearing.
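To make the psychoacoustic stimulus concrete, here is a minimal sketch of the acoustic version described above: 36-ms trains of 250-Hz clicks presented once per second. It is an illustration only, not the authors' code; the sample rate and click width are assumptions.

```python
import numpy as np

fs = 44100           # sample rate in Hz (assumed)
train_dur = 0.036    # each train lasts 36 ms
click_rate = 250     # 250 clicks per second within a train
click_len = 2        # click width in samples (assumed)

# Build one 36-ms click train: 9 clicks spaced 4 ms apart.
train = np.zeros(int(fs * train_dur))
for onset in np.arange(0, train_dur, 1.0 / click_rate):
    i = int(onset * fs)
    train[i:i + click_len] = 1.0

# Present one train per second to get the 1-Hz repetition rate.
n_trains = 5
stim = np.zeros(n_trains * fs)
for k in range(n_trains):
    stim[k * fs:k * fs + train.size] = train
```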
PMCID: PMC4323344  PMID: 25668423
2.  Probe Microphone Measurements: 20 Years of Progress 
Trends in Amplification  2001;5(2):35-68.
Probe-microphone testing was conducted in the laboratory as early as the 1940s (e.g., the classic work of Wiener and Ross, reported in 1946); however, it was not until the late 1970s that a “dispenser friendly” system was available for testing hearing aids in the real ear. In this case, the term “dispenser friendly” is used somewhat loosely. The 1970s equipment that I'm referring to was first described in a paper presented by Earl Harford, Ph.D., in September of 1979 at the International Ear Clinics' Symposium in Minneapolis. At this meeting, Earl reported on his clinical experiences of testing hearing aids in the real ear using a miniature (by 1979 standards) Knowles microphone. The microphone was coupled to an interfacing impedance matching system (developed by David Preves, Ph.D., who at the time worked at Starkey Laboratories) which could be used with existing hearing aid analyzer systems (see Harford, 1980, for a review of this early work). Unlike today's probe tube microphone systems, this early method of clinical real-ear measurement involved putting the entire microphone (about 4 mm by 5 mm by 2 mm) in the ear canal down by the eardrum of the patient. If you think cerumen is a problem with probe-mic measurements today, you should have seen the condition of this microphone after a day's work!
While this early instrumentation was a bit cumbersome, we quickly learned the advantages that probe-microphone measures provided in the fitting of hearing aids. We frequently ran into calibration and equalization problems, not to mention a yelp or two from the patient, but the resulting information was worth the trouble.
Help soon arrived. In the early 1980s, the first computerized probe-tube microphone system, the Rastronics CCI-10 (developed in Denmark by Steen Rasmussen), entered the U.S. market (Nielsen and Rasmussen, 1984). This system had a silicone tube attached to the microphone (the transmission of sound through this tube was part of the calibration process), which eliminated the need to place the microphone itself in the ear canal. By early 1985, three or four different manufacturers had introduced this new type of computerized probe-microphone equipment, and this hearing aid verification procedure became part of the standard protocol for many audiology clinics. At this time, the POGO (Prescription Of Gain and Output) and Libby 1/3 prescriptive fitting methods were at the peak of their popularity, and a revised NAL (National Acoustic Laboratories) procedure was just being introduced. All three of these methods were based on functional gain, but insertion gain easily could be substituted, and therefore, manufacturers included calculation of these prescriptive targets as part of the probe-microphone equipment software. Audiologists, frustrated with the tedious and unreliable functional gain procedure they had been using, soon developed a fascination with matching real-ear results to prescriptive targets on a computer monitor.
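As a reminder of the arithmetic that made this substitution possible: real-ear insertion gain is simply the aided minus the unaided real-ear response at each frequency, which can then be compared against a prescriptive target. A minimal sketch with made-up values:

```python
import numpy as np

freqs = np.array([250, 500, 1000, 2000, 4000])    # audiometric frequencies, Hz
reug = np.array([1.0, 2.0, 3.0, 12.0, 14.0])      # real-ear unaided gain, dB (illustrative)
reag = np.array([11.0, 15.0, 22.0, 35.0, 38.0])   # real-ear aided gain, dB (illustrative)

reig = reag - reug                                 # real-ear insertion gain, dB
target = np.array([10.0, 13.0, 18.0, 25.0, 28.0]) # prescriptive target, dB (illustrative)
print(dict(zip(freqs.tolist(), (reig - target).round(1))))  # deviation from target per frequency
```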
In some ways, not a lot has changed since those early days of probe-microphone measurements. Most people who use this equipment simply run a gain curve for a couple of inputs and see if it's close to prescriptive target—something that could be accomplished using the equipment from 1985. Contrary to the predictions of many, probe-mic measures have not become the “standard hearing aid verification procedure” (Mueller and Strouse, 1995). There also has been little or no increase in the use of this equipment in recent years. In 1998, I reported on a survey that was conducted by The Hearing Journal regarding the use of probe-microphone measures (Mueller, 1998). We first looked at what percentage of people dispensing hearing aids own (or have immediate access to) probe-microphone equipment. Our results showed that 23% of hearing instrument specialists and 75% of audiologists have this equipment. Among audiologists, ownership varied among work settings: 91% for hospitals/clinics, 73% for audiologists working for physicians, and 69% for audiologists in private practice. But more important, and a bit puzzling, was the finding that nearly one half of the people who fit hearing aids and have access to this equipment seldom or never use it.
I doubt that the use rate of probe-microphone equipment has changed much in the last three years, and if anything, I suspect it has gone down. Why do I say that? As programmable hearing aids have become the standard fitting in many clinics, it is tempting to become enamored with the simulated gain curves on the fitting screen, somehow believing that this is what really is happening in the real ear. Additionally, some dispensers have been told that you can't do reliable probe-mic testing with modern hearing aids—this, of course, is not true, and we'll address this issue in the Frequently Asked Questions portion of this paper.
The infrequent use of probe-mic testing among dispensers is discouraging, and let's hope that probe-mic equipment does not suffer the fate of the rowing machine stored in your garage. A lot has changed over the years with the equipment itself, and there are also expanded clinical applications and procedures. We have new manufacturers, procedures, acronyms, and noises. We have test procedures that allow us to accurately predict the output of a hearing aid in an infant's ear. We now have digital hearing aids, which provide us the opportunity to conduct real-ear measures of the effects of digital noise reduction, speech enhancement, adaptive feedback, expansion, and all the other features. Directional microphone hearing aids have grown in popularity, and what better way to assess real-ear directivity than with probe-mic measures? The array of assistive listening devices has expanded, and so has the role of the real-ear assessment of these products. And finally, with today's PC-based systems, we can program our hearing aids and simultaneously observe the resulting real-ear effects on the same fitting screen, or even conduct an automated target fitting using ear canal monitoring of the output. There have been a lot of changes, and we'll talk about all of them in this issue of Trends.
PMCID: PMC4168927  PMID: 25425897
3.  The natural history of sound localization in mammals – a story of neuronal inhibition 
Our concepts of sound localization in the vertebrate brain are widely based on the general assumption that both the ability to detect air-borne sounds and the neuronal processing are homologous in archosaurs (present day crocodiles and birds) and mammals. Yet studies repeatedly report conflicting results on the neuronal circuits and mechanisms, in particular the role of inhibition, as well as on the coding strategies of avian and mammalian model systems. Here we argue that the mammalian and avian phylogeny of spatial hearing is characterized by a convergent evolution of hearing air-borne sounds rather than by homology. In particular, the different evolutionary origins of tympanic ears and the different availability of binaural cues in early mammals and archosaurs imposed distinct constraints on the respective binaural processing mechanisms. The role of synaptic inhibition in generating binaural spatial sensitivity in mammals is highlighted, as it reveals a unifying principle of mammalian circuit design for encoding sound position. Together, we combine evolutionary, anatomical, and physiological arguments for making a clear distinction between the processing mechanisms and coding strategies of mammals and those of archosaurs. We emphasize that a consideration of the convergent nature of neuronal mechanisms will significantly increase the explanatory power of studies of spatial processing in both mammals and birds.
PMCID: PMC4181121  PMID: 25324726
MSO; LSO; evolution; glycine; GABA; archosaurs; birds; binaural hearing
4.  Directional hearing: from biophysical binaural cues to directional hearing outdoors 
When insects communicate by sound, or use acoustic cues to escape predators or to detect prey or hosts, they usually have to localize the sound in order to perform adaptive behavioral responses. In the case of particle velocity receivers such as the antennae of mosquitoes, directionality is no problem because such receivers are inherently directional. Insects equipped with bilateral pairs of tympanate ears could in principle make use of binaural cues for sound localization, like all other animals with two ears. However, their small size makes it difficult to generate sufficiently large binaural cues, with respect both to interaural time differences (ITDs), because interaural distances are so small, and to interaural intensity differences (IIDs), since the ratio of body size to the wavelength of sound is rather unfavorable for diffractive effects. In my review, I will only briefly cover these biophysical aspects of directional hearing. Instead, I will focus on aspects of directional hearing that have received relatively little attention previously: the evolution of a pressure difference receiver, 3D hearing, directional hearing outdoors, and directional hearing for auditory scene analysis.
PMCID: PMC4282874  PMID: 25231204
Directional hearing; Pressure difference receiver; Binaural cues; Interaural time difference; Interaural intensity difference
5.  Effects of unilateral input and mode of hearing in the better ear: Self-reported performance using the Speech, Spatial and Qualities of Hearing Scale 
Ear and Hearing  2014;35(1):10.1097/AUD.0b013e3182a3648b.
To evaluate effects of hearing mode (normal hearing, cochlear implant or hearing aid) on everyday communication among adult unilateral listeners using the Speech, Spatial and Qualities of Hearing scale (SSQ). Individuals with one good, naturally hearing ear were expected to have higher overall ratings than unilateral listeners dependent on a cochlear implant or hearing aid. We anticipated that listening environments reliant on binaural processing for successful communication would be rated most disabling by all unilateral listeners. Regardless of hearing mode, all hearing-impaired participants were expected to have lower ratings than individuals with normal hearing bilaterally. A secondary objective was to compare post-treatment SSQ results of participants who subsequently obtained a cochlear implant for the poorer hearing ear to those of participants with a single normal hearing ear.
Participants were 87 adults recruited as part of ongoing research investigating asymmetric hearing effects. Sixty-six participants were unilateral listeners who had one unaided/non-implanted ear with a severe to profound hearing loss and were grouped based on hearing mode of the better ear: 30 had one normal hearing ear (i.e., unilateral hearing loss participants); 20 had a unilateral cochlear implant; and 16 had a unilateral hearing aid. Data were also collected from 21 normal-hearing individuals, as well as from a subset of participants who subsequently received a cochlear implant in the poorer ear and thus became bilateral listeners. Data analysis was completed at the domain and subscale levels.
A significant mode-of-hearing group effect for the hearing-impaired participants (i.e., with unilateral hearing loss, a unilateral cochlear implant, or a unilateral hearing aid) was identified for two domains (Speech and Qualities) and six subscales (Speech in Quiet, Speech in Noise, Speech in Speech Contexts, Multiple Speech Stream Processing and Switching, Identification of Sound and Objects, and Sound Quality and Naturalness). There was no significant mode-of-hearing group effect for the Spatial domain or the other four subscales (Localization, Distance and Movement, Segregation of Sounds, and Listening Effort). Follow-up analysis indicated the unilateral normal hearing ear group had significantly higher ratings than the unilateral cochlear implant and/or hearing aid groups for the Speech domain and four of the ten subscales; the cochlear implant and hearing aid groups did not have ratings significantly higher than each other or than the unilateral hearing loss group. Audibility and sound quality imparted by hearing mode were identified as factors related to subjective listening experience. After cochlear implantation to restore bilateral hearing, SSQ ratings for bilateral cochlear implant and/or cochlear implant plus hearing aid participants were significantly higher than those of the unilateral hearing loss group for Speech in Quiet, Speech in Noise, Localization, Distance and Movement, Listening Effort, and the Spatial domain. Hearing-impaired individuals had significantly poorer ratings in all areas compared to those with bilateral normal hearing.
Adults reliant on a single ear, irrespective of better ear hearing mode, including those with one normal hearing ear, are at a disadvantage in all aspects of everyday listening and communication. Audibility and hearing mode were shown to differentially contribute to listening experience.
PMCID: PMC3872501  PMID: 24084062
Unilateral hearing loss; Hearing disability; Cochlear implant; Hearing aid
6.  Cochlear Implantation in Adults with Asymmetric Hearing Loss 
Ear and Hearing  2012;33(4):521-533.
Bilateral severe-to-profound sensorineural hearing loss is a standard criterion for cochlear implantation. Increasingly, patients are implanted in one ear and continue to use a hearing aid in the non-implanted ear to improve abilities such as sound localization and speech understanding in noise. Patients with severe-to-profound hearing loss in one ear and a more moderate hearing loss in the other ear (i.e., asymmetric hearing) are not typically considered candidates for cochlear implantation. Amplification in the poorer ear is often unsuccessful due to limited benefit, restricting the patient to unilateral listening from the better ear alone. The purpose of this study was to determine if patients with asymmetric hearing loss could benefit from cochlear implantation in the poorer ear with continued use of a hearing aid in the better ear.
Ten adults with asymmetric hearing between ears participated. In the poorer ear, all participants met cochlear implant candidacy guidelines; seven had postlingual onset and three had pre/perilingual onset of severe-to-profound hearing loss. All had open-set speech recognition in the better hearing ear. Assessment measures included word and sentence recognition in quiet, sentence recognition in fixed noise (four-talker babble) and in diffuse restaurant noise using an adaptive procedure, localization of word stimuli, and a hearing handicap scale. Participants were evaluated pre-implant with hearing aids and post-implant with the implant alone, the hearing aid alone in the better ear, and bimodally (the implant and hearing aid in combination). Postlingual participants were evaluated at six months post-implant and pre/perilingual participants were evaluated at six and 12 months post-implant. Data analysis compared results 1) of the poorer hearing ear pre-implant (with hearing aid) and post-implant (with cochlear implant), 2) with the device(s) used for everyday listening pre- and post-implant, and 3) between the hearing aid-alone and bimodal listening conditions post-implant.
The postlingual participants showed significant improvements in speech recognition after six months of cochlear implant use in the poorer ear. Five postlingual participants had a bimodal advantage over the hearing aid-alone condition on at least one test measure. On average, the postlingual participants had significantly improved localization with bimodal input compared to the hearing aid alone. Only one pre/perilingual participant had open-set speech recognition with the cochlear implant. This participant had better hearing than the other two pre/perilingual participants in both the poorer and better ear. Localization abilities were not significantly different between the bimodal and hearing aid-alone conditions for the pre/perilingual participants. Mean hearing handicap ratings improved post-implant for all participants, indicating perceived benefit in everyday life with the addition of the cochlear implant.
Patients with asymmetric hearing loss who are not typical cochlear implant candidates can benefit from using a cochlear implant in the poorer ear with continued use of a hearing aid in the better ear. For this group of ten, the seven postlingually deafened participants showed greater benefits with the cochlear implant than the pre/perilingual participants; however, further study is needed to determine maximum benefit for those with early onset of hearing loss.
PMCID: PMC3383437  PMID: 22441359
Asymmetric hearing loss; Bilateral; Bimodal; Cochlear implant; Speech recognition
7.  Using Evoked Potentials to Match Interaural Electrode Pairs with Bilateral Cochlear Implants 
Bilateral cochlear implantation seeks to restore the advantages of binaural hearing to the profoundly deaf by providing binaural cues normally important for accurate sound localization and speech reception in noise. Psychophysical observations suggest that a key issue for the implementation of a successful binaural prosthesis is the ability to match the cochlear positions of stimulation channels in each ear. We used a cat model of bilateral cochlear implants with eight-electrode arrays implanted in each cochlea to develop and test a noninvasive method based on evoked potentials for matching interaural electrodes. The arrays allowed the cochlear location of stimulation to be independently varied in each ear. The binaural interaction component (BIC) of the electrically evoked auditory brainstem response (EABR) was used as an assay of binaural processing. BIC amplitude peaked for interaural electrode pairs at the same relative cochlear position and dropped with increasing cochlear separation in either direction. To test the hypothesis that BIC amplitude peaks when electrodes from the two sides activate maximally overlapping neural populations, we measured multiunit neural activity along the tonotopic gradient of the inferior colliculus (IC) with 16-channel recording probes and determined the spatial pattern of IC activation for each stimulating electrode. We found that the interaural electrode pairings that produced the best aligned IC activation patterns were also those that yielded maximum BIC amplitude. These results suggest that EABR measurements may provide a method for assigning frequency–channel mappings in bilateral implant recipients, such as pediatric patients, for which psychophysical measures of pitch ranking or binaural fusion are unavailable.
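The binaural interaction component used here is conventionally computed by subtracting the sum of the two monaurally evoked responses from the binaurally evoked response; a nonzero residual indicates binaural interaction. A toy sketch with synthetic waveforms (not the paper's data):

```python
import numpy as np

fs = 20000
t = np.arange(0, 0.010, 1 / fs)           # 10-ms evoked-response epoch

def wave(amp, latency_ms):
    """Toy evoked potential: a Gaussian-windowed deflection."""
    return amp * np.exp(-((t - latency_ms / 1000) ** 2) / (0.0004 ** 2))

left = wave(1.0, 4.0)                     # response to left-ear stimulation alone
right = wave(1.0, 4.1)                    # response to right-ear stimulation alone
binaural = wave(1.8, 4.05)                # response to simultaneous bilateral stimulation

bic = binaural - (left + right)           # binaural interaction component
bic_amplitude = bic.min()                 # the BIC is typically a negativity
```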
PMCID: PMC1907379  PMID: 17225976
binaural hearing; electric stimulation; neural prosthesis; cochlear implant; inferior colliculus
9.  Statistics of Natural Binaural Sounds 
PLoS ONE  2014;9(10):e108968.
Binaural sound localization is usually considered a discrimination task, where interaural phase (IPD) and level (ILD) disparities at narrowly tuned frequency channels are utilized to identify the position of a sound source. In natural conditions, however, binaural circuits are exposed to stimulation by sound waves originating from multiple, often moving and overlapping, sources. Therefore, the statistics of binaural cues depend on the acoustic properties and spatial configuration of the environment. The distributions of naturally encountered cues and their dependence on the physical properties of an auditory scene had not been studied before. In the present work we analyzed the statistics of naturally encountered binaural sounds. We performed binaural recordings of three auditory scenes with varying spatial configuration and analyzed the empirical cue distributions from each scene. We found that certain properties, such as the spread of IPD distributions and the overall shape of ILD distributions, do not vary strongly between different auditory scenes. Moreover, ILD distributions varied across frequency channels much less, and IPDs often attained much higher values, than would be predicted from head filtering properties. To understand the complexity of the binaural hearing task in the natural environment, sound waveforms were analyzed by performing Independent Component Analysis (ICA). Properties of the learned basis functions indicate that in natural conditions the sound waves in each ear are predominantly generated by independent sources. This implies that real-world sound localization must rely on mechanisms more complex than mere cue extraction.
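A rough sketch of this kind of cue extraction, assuming a short-time Fourier filterbank (the authors' actual pipeline may differ): ILD as the per-channel level ratio and IPD as the per-channel phase difference between the two ears' recordings.

```python
import numpy as np
from scipy.signal import stft

fs = 44100
left = np.random.randn(fs)                 # stand-ins for one second of a binaural recording
right = 0.8 * np.roll(left, 20)            # right ear: attenuated, delayed copy

f, frames, L = stft(left, fs=fs, nperseg=1024)
_, _, R = stft(right, fs=fs, nperseg=1024)

eps = 1e-12
ild = 20 * np.log10((np.abs(L) + eps) / (np.abs(R) + eps))  # dB, per channel and frame
ipd = np.angle(L * np.conj(R))                              # radians, wrapped to [-pi, pi]

# Empirical cue distribution in each frequency channel:
ild_hists = [np.histogram(ild[ch], bins=50)[0] for ch in range(len(f))]
```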
PMCID: PMC4186785  PMID: 25285658
10.  The Opponent Channel Population Code of Sound Location Is an Efficient Representation of Natural Binaural Sounds 
PLoS Computational Biology  2015;11(5):e1004294.
In mammalian auditory cortex, sound source position is represented by a population of broadly tuned neurons whose firing is modulated by sounds located at all positions surrounding the animal. Peaks of their tuning curves are concentrated at lateral positions, while their slopes are steepest at the interaural midline, allowing for maximum localization accuracy in that area. These experimental observations contradict initial assumptions that the auditory space is represented as a topographic cortical map. It has been suggested that a “panoramic” code has evolved to match specific demands of the sound localization task. This work provides evidence suggesting that the properties of spatial auditory neurons identified experimentally follow from a general design principle: learning a sparse, efficient representation of natural stimuli. Natural binaural sounds were recorded and served as input to a hierarchical sparse-coding model. In the first layer, left and right ear sounds were separately encoded by a population of complex-valued basis functions which separated phase and amplitude. Both parameters are known to carry information relevant for spatial hearing. Monaural input converged in the second layer, which learned a joint representation of amplitude and interaural phase difference. Spatial selectivity of each second-layer unit was measured by exposing the model to natural sound sources recorded at different positions. The obtained tuning curves match well the tuning characteristics of neurons in the mammalian auditory cortex. This study connects neuronal coding of the auditory space with natural stimulus statistics and generates new experimental predictions. Moreover, the results presented here suggest that cortical regions with seemingly different functions may implement the same computational strategy: efficient coding.
Author Summary
The ability to localize the position of a sound source is vital to many organisms, since audition provides information about areas that are not accessible visually. While its importance is undisputed, its neuronal mechanisms are not well understood. It has been observed in experimental studies that, despite the crucial role of sound localization, single neurons in the auditory cortex of mammals carry very little information about the sound position. The joint activity of multiple neurons is required to accurately localize sound, and it is an open question how this computation is performed by auditory cortical circuits. In this work I propose a statistical model of natural stereo sounds. The model is based on the theoretical concept of sparse, efficient coding, which has provided candidate explanations of how different sensory systems may work. When adapted to binaural sounds recorded in a natural environment, the model reveals properties highly similar to those of neurons in the mammalian auditory cortex, suggesting that mechanisms of neuronal auditory coding can be understood in terms of general, theoretical principles.
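A loose sketch of the two-stage feature construction described above, with every detail assumed for illustration: complex-valued first-layer coefficients for each ear are split into amplitude and phase, and the second layer receives log-amplitudes together with the interaural phase difference.

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-ins for first-layer complex coefficients (64 basis functions per ear):
a_left = rng.standard_normal(64) + 1j * rng.standard_normal(64)
a_right = rng.standard_normal(64) + 1j * rng.standard_normal(64)

log_amp = np.log(np.abs(np.stack([a_left, a_right])) + 1e-9)  # amplitude code, both ears
ipd = np.angle(a_left * np.conj(a_right))                     # interaural phase difference
layer2_input = np.concatenate([log_amp.ravel(), ipd])         # joint amplitude/IPD representation
```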
PMCID: PMC4440638  PMID: 25996373
11.  Binaural Unmasking with Bilateral Cochlear Implants 
Nearly 100,000 deaf patients worldwide have had their hearing restored by a cochlear implant (CI) fitted to one ear. However, although many patients understand speech well in quiet, even the most successful experience difficulty in noisy situations. In contrast, normal-hearing (NH) listeners achieve improved speech understanding in noise by processing the differences between the waveforms reaching the two ears. Here we show that a form of binaural processing can be achieved by patients fitted with an implant in each ear, leading to substantial improvements in signal detection in the presence of competing sounds. The stimulus in each ear consisted of a narrowband noise masker, to which a tonal signal was sometimes added; this mixture was half-wave rectified, low-pass filtered, and then used to modulate a 1000-pps biphasic pulse train. All four CI users tested showed significantly better signal detection when the signal was presented out of phase at the two ears than when it was in phase. This advantage occurred even though subjects received information only about the slowly varying envelope of the presented sounds, contrary to previous reports that waveform fine structure dominates binaural processing. If this advantage generalizes to multichannel situations, it would demonstrate that envelope-based CI speech-processing strategies may allow patients to exploit binaural unmasking in order to improve speech understanding in noise. Furthermore, because the tested patients had been deprived of binaural hearing for eight or more years, our results show that some sensitivity to time-varying interaural cues can persist over extended periods of binaural deprivation.
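The per-ear signal chain described above can be sketched as follows. The masker band, envelope cutoff, and filter orders are assumptions for illustration, not values from the paper.

```python
import numpy as np
from scipy.signal import butter, lfilter

fs = 20000
dur = 0.5
t = np.arange(0, dur, 1 / fs)

# Narrowband noise masker plus an optional tonal signal (assumed 500-Hz center).
b, a = butter(4, [450, 550], btype='bandpass', fs=fs)
masker = lfilter(b, a, np.random.randn(t.size))
signal = 0.5 * np.sin(2 * np.pi * 500 * t)     # inverted at one ear for the out-of-phase condition

mixture = masker + signal
rectified = np.maximum(mixture, 0)             # half-wave rectification
b, a = butter(2, 50, btype='lowpass', fs=fs)   # envelope low-pass (cutoff assumed)
envelope = lfilter(b, a, rectified)

# 1000-pps biphasic pulse train, amplitude-modulated by the envelope.
pps = 1000
pulses = np.zeros_like(t)
idx = (np.arange(0, dur, 1 / pps) * fs).astype(int)
pulses[idx] = 1.0
pulses[idx + 1] = -1.0                         # second phase of each biphasic pulse
stimulus = pulses * np.maximum(envelope, 0)
```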
PMCID: PMC2504627  PMID: 16941078
cochlear implants; binaural hearing; masking level difference; signal detection
12.  Investigating Long-Term Effects of Cochlear Implantation in Single-Sided Deafness: A Best Practice Model for Longitudinal Assessment of Spatial Hearing Abilities and Tinnitus Handicap 
To evaluate methods for measuring long-term benefits of cochlear implantation in a patient with single-sided deafness (SSD) with respect to spatial hearing and to document improved quality of life due to reduced tinnitus.
A single adult male with profound right-sided sensorineural hearing loss and normal hearing in the left ear who underwent right-sided cochlear implantation.
The subject was evaluated at 6, 9, 12, and 18 months after implantation on speech intelligibility with specific target-masker configurations, sound localization accuracy, audiological performance, and tinnitus handicap. Testing conditions involved the acoustic (NH) ear only, the cochlear implant (CI) ear (acoustic ear plugged), and the bilateral condition (CI+NH). Measures of spatial hearing included speech intelligibility improvement due to spatial release from masking (SRM) and sound localization. In addition, traditional measures known as “head shadow,” “binaural squelch,” and “binaural summation” were evaluated.
The best indicator of improved speech intelligibility was SRM, in which both ears are activated but the relative locations of target and masker(s) are manipulated. Measures that compare performance with a single ear to performance utilizing bilateral auditory input indicated evidence of the ability to integrate inputs across the ears, possibly reflecting early binaural processing, after 12 months of bilateral input. Sound localization accuracy improved with the addition of the implant, and a large improvement in tinnitus handicap was observed.
Cochlear implantation resulted in improved sound localization accuracy when compared to performance utilizing only the NH ear, and reduced tinnitus handicap was observed with use of the implant. The use of SRM addresses some of the current limitations of traditional measures of spatial and binaural hearing, as spatial cues related to target and maskers are manipulated, rather than the ear(s) tested. The sound testing methods and calculations described here are therefore recommended for assessing performance in a larger sample of individuals with SSD who receive a CI.
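For reference, SRM and the traditional measures named above are each just a difference between speech reception thresholds (SRTs, in dB SNR; lower is better) obtained in two configurations. Illustrative arithmetic with made-up numbers, under one common convention:

```python
# Spatial release from masking: move the masker away from the target, both ears active.
srt_colocated = 2.0      # target and masker from the same loudspeaker
srt_separated = -2.5     # masker displaced from the target
srm = srt_colocated - srt_separated              # 4.5 dB of release

# Binaural summation: add the second ear with the spatial configuration unchanged.
srt_one_ear = 1.0
srt_both_ears = -0.5
summation = srt_one_ear - srt_both_ears          # 1.5 dB

# Head shadow: one ear listening, masker moved from that ear's side to the far side.
srt_masker_near = 4.0
srt_masker_far = -1.0
head_shadow = srt_masker_near - srt_masker_far   # 5.0 dB

# Binaural squelch: keep the masker contralateral to the listening ear, then add the near ear.
srt_both_masker_side = -2.0
squelch = srt_masker_far - srt_both_masker_side  # 1.0 dB
```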
PMCID: PMC4334463  PMID: 25158615
13.  Perception of Binaural Cues Develops in Children Who Are Deaf through Bilateral Cochlear Implantation 
PLoS ONE  2014;9(12):e114841.
There are significant challenges to restoring binaural hearing to children who have been deaf from an early age. The uncoordinated and poor temporal information available from cochlear implants distorts perception of interaural timing differences normally important for sound localization and listening in noise. Moreover, binaural development can be compromised by bilateral and unilateral auditory deprivation. Here, we studied perception of both interaural level and timing differences in 79 children/adolescents using bilateral cochlear implants and 16 peers with normal hearing. They were asked on which side of their head they heard unilaterally or bilaterally presented click trains or electrical pulse trains. Interaural level cues were identified by most participants, including adolescents with long periods of unilateral cochlear implant use and little bilateral implant experience. Interaural timing cues were not detected by new bilateral adolescent users, consistent with previous evidence. Evidence of binaural timing detection was, for the first time, found in children who had much longer implant experience, but it was marked by poorer than normal sensitivity and abnormally strong dependence on current level differences between implants. In addition, children with prior unilateral implant use showed a higher proportion of responses to their first implanted sides than children implanted simultaneously. These data indicate that there are functional repercussions of developing binaural hearing through bilateral cochlear implants, particularly when provided sequentially; nonetheless, children have an opportunity to use these devices to hear better in noise and gain spatial hearing.
PMCID: PMC4273969  PMID: 25531107
14.  Abnormal Binaural Spectral Integration in Cochlear Implant Users 
Bimodal stimulation, or stimulation of a cochlear implant (CI) together with a contralateral hearing aid (HA), can improve speech perception in noise. However, this benefit is variable, and some individuals even experience interference with bimodal stimulation. One contributing factor to this variability may be differences in binaural spectral integration (BSI) due to abnormal auditory experience. CI programming introduces interaural pitch mismatches, in which the frequencies allocated to the electrodes (and contralateral HA) differ from the electrically stimulated cochlear frequencies. Previous studies have shown that some, but not all, CI users adapt pitch perception to reduce this mismatch. The purpose of this study was to determine whether broadened BSI may also reduce the perception of mismatch. Interaural pitch mismatches and dichotic pitch fusion ranges were measured in 21 bimodal CI users. Seventeen subjects with wide fusion ranges also completed a task to pitch match various fused electrode–tone pairs. All subjects showed abnormally wide dichotic fusion frequency ranges of 1–4 octaves. The fusion range size was weakly correlated with the interaural pitch mismatch, suggesting a link between broad binaural pitch fusion and large interaural pitch mismatch. Dichotic pitch averaging was also observed, in which a new binaural pitch resulted from the fusion of the original monaural pitches, even when the pitches differed by as much as 3–4 octaves. These findings suggest that abnormal BSI, indicated by broadened fusion ranges and spectral averaging between ears, may account for the speech perception interference and nonoptimal integration observed with bimodal compared with monaural hearing device use.
PMCID: PMC3946135  PMID: 24464088
cochlear implants; hearing aids; bimodal; pitch; fusion
15.  Comparison of the benefits of cochlear implantation versus contra-lateral routing of signal hearing aids in adult patients with single-sided deafness: study protocol for a prospective within-subject longitudinal trial 
Individuals with a unilateral severe-to-profound hearing loss, or single-sided deafness, report difficulty with listening in many everyday situations despite having access to well-preserved acoustic hearing in one ear. The standard of care for single-sided deafness available on the UK National Health Service is a contra-lateral routing of signals hearing aid, which transfers sounds from the impaired ear to the non-impaired ear. This hearing aid has been found to improve speech understanding in noise when the signal-to-noise ratio is more favourable at the impaired ear than the non-impaired ear. However, the indiscriminate routing of signals to a single ear can have detrimental effects when interfering sounds are located on the side of the impaired ear. Recent published evidence has suggested that cochlear implantation in individuals with single-sided deafness can restore access to the binaural cues which underpin the ability to localise sounds and segregate speech from other interfering sounds.
The current trial was designed to assess the efficacy of cochlear implantation compared to a contra-lateral routing of signals hearing aid in restoring binaural hearing in adults with acquired single-sided deafness. Patients are assessed at baseline and after receiving a contra-lateral routing of signals hearing aid. A cochlear implant is then provided to those patients who do not receive sufficient benefit from the hearing aid. This within-subject longitudinal design reflects the expected care pathway should cochlear implantation be provided for single-sided deafness on the UK National Health Service. The primary endpoints are measures of binaural hearing at baseline, after provision of a contra-lateral routing of signals hearing aid, and after cochlear implantation. Binaural hearing is assessed in terms of the accuracy with which sounds are localised and speech is perceived in background noise. The trial is also designed to measure the impact of the interventions on hearing- and health-related quality of life.
This multi-centre trial was designed to provide evidence for the efficacy of cochlear implantation compared to the contra-lateral routing of signals. A purpose-built sound presentation system and established measurement techniques will provide reliable and precise measures of binaural hearing.
Trial registration
Current Controlled Trials (05/JUL/2013)
PMCID: PMC4141989  PMID: 25152694
Cochlear implantation; Single-sided deafness; Unilateral hearing loss; Contra-lateral routing of signals; Hearing aid; Binaural hearing; Spatial listening
16.  Cochlear implantation with hearing preservation yields significant benefit for speech recognition in complex listening environments 
Ear and Hearing  2013;34(4):413-425.
The aim of this study was to assess the benefit of having preserved acoustic hearing in the implanted ear for speech recognition in complex listening environments.
The current study used a within-subjects, repeated-measures design including 21 English-speaking and 17 Polish-speaking cochlear implant recipients with preserved acoustic hearing in the implanted ear. The patients were implanted with electrodes that varied in insertion depth from 10 to 31 mm. Mean preoperative low-frequency thresholds (average of 125, 250, and 500 Hz) in the implanted ear were 39.3 and 23.4 dB HL for the English- and Polish-speaking participants, respectively. In one condition, speech perception was assessed in an 8-loudspeaker environment in which the speech signals were presented from one loudspeaker and restaurant noise was presented from all loudspeakers. In another condition, the signals were presented in a simulation of a reverberant environment with a reverberation time of 0.6 s. The response measures included speech reception thresholds (SRTs) and percent correct sentence understanding for two test conditions: cochlear implant (CI) plus low-frequency hearing in the contralateral ear (bimodal condition) and CI plus low-frequency hearing in both ears (best-aided condition). A subset of 6 English-speaking listeners was also assessed on measures of interaural time difference (ITD) thresholds for a 250-Hz signal.
Small, but significant, improvements in performance (1.7–2.1 dB and 6–10 percentage points) were found for the best-aided condition vs. the bimodal condition. Postoperative thresholds in the implanted ear were correlated with the degree of EAS benefit for speech recognition in diffuse noise. Neither audiometric thresholds in the implanted ear nor the elevation in thresholds following surgery was reliably related to the improvement in speech understanding in reverberation. There was a significant correlation between the ITD threshold at 250 Hz and EAS-related benefit for the adaptive SRT.
Our results suggest that (i) preserved low-frequency hearing improves speech understanding for CI recipients, (ii) testing in complex listening environments, in which binaural timing cues differ for signal and noise, may best demonstrate the value of having two ears with low-frequency acoustic hearing, and (iii) preservation of binaural timing cues, albeit poorer than observed for individuals with normal hearing, is possible following unilateral cochlear implantation with hearing preservation and is associated with EAS benefit. Our results demonstrate significant communicative benefit for hearing preservation in the implanted ear and provide support for the expansion of cochlear implant criteria to include individuals with low-frequency thresholds in even the normal to near-normal range.
PMCID: PMC3742689  PMID: 23446225
reverberation; noise; cochlear implant; EAS; hearing preservation; bimodal; hybrid; interaural time difference (ITD)
17.  Developmental plasticity of spatial hearing following asymmetric hearing loss: context-dependent cue integration and its clinical implications 
Under normal hearing conditions, comparisons of the sounds reaching each ear are critical for accurate sound localization. Asymmetric hearing loss should therefore degrade spatial hearing and has become an important experimental tool for probing the plasticity of the auditory system, both during development and adulthood. In clinical populations, hearing loss affecting one ear more than the other is commonly associated with otitis media with effusion, a disorder experienced by approximately 80% of children before the age of two. Asymmetric hearing may also arise in other clinical situations, such as after unilateral cochlear implantation. Here, we consider the role played by spatial cue integration in sound localization under normal acoustical conditions. We then review evidence for adaptive changes in spatial hearing following a developmental hearing loss in one ear, and show that adaptation may be achieved either by learning a new relationship between the altered cues and directions in space or by changing the way different cues are integrated in the brain. We next consider developmental plasticity as a source of vulnerability, describing maladaptive effects of asymmetric hearing loss that persist even when normal hearing is provided. We also examine the extent to which the consequences of asymmetric hearing loss depend upon its timing and duration. Although much of the experimental literature has focused on the effects of a stable unilateral hearing loss, some of the most common hearing impairments experienced by children tend to fluctuate over time. We therefore propose that there is a need to bridge this gap by investigating the effects of recurring hearing loss during development, and outline recent steps in this direction. We conclude by arguing that this work points toward a more nuanced view of developmental plasticity, in which plasticity may be selectively expressed in response to specific sensory contexts, and consider the clinical implications of this.
PMCID: PMC3873525  PMID: 24409125
auditory localization; binaural; monaural; conductive hearing loss; adaptation; learning; cortex; midbrain
18.  Neural tuning matches frequency-dependent time differences between the ears 
eLife  2015;4:e06072.
The time it takes a sound to travel from source to ear differs between the ears and creates an interaural delay. It varies systematically with spatial direction and is generally modeled as a pure time delay, independent of frequency. In acoustical recordings, we found that interaural delay varies with frequency at a fine scale. In physiological recordings of midbrain neurons sensitive to interaural delay, we found that preferred delay also varies with sound frequency. Similar observations reported earlier were not incorporated in a functional framework. We find that the frequency dependence of acoustical and physiological interaural delays is matched in key respects. This suggests that binaural neurons are tuned to acoustical features of ecological environments, rather than to fixed interaural delays. Using recordings from the nerve and brainstem, we show that this tuning may emerge from neurons detecting coincidences between input fibers that are mistuned in frequency.
eLife digest
When you hear a sound, such as someone calling your name, it is often possible to make a good estimate of where that sound came from. If the sound came from the left, it would reach your left ear before your right ear, and vice versa if the sound originated from your right. The time that passes between the sound reaching each ear is known as the ‘interaural time difference’. Previous research has suggested that specific neurons in the brain respond to specific interaural time differences, and the brain then uses this interaural time difference to locate the sound.
Sounds come in various frequencies, from high-pitched alarms to low bass tones, and how a neuron responds to interaural time differences appears to change according to the frequency of the sound being played. For example, a given neuron may respond to a 200-microsecond interaural time difference when a tone is played at a high frequency, but show no response to this time difference when the tone is played at a low frequency. To date, researchers had been unable to explain why this occurs.
Here, Benichoux et al. investigated this topic by playing a variety of sounds to anaesthetized cats. Electrodes were used to record the responses of individual neurons in the cats' brains, and the properties of the sound waves that reached the cats' ears were also recorded. These experiments revealed that the time it took a sound to travel from a location to each of the cats' ears, and consequently the interaural time difference, depended on whether it was a high-pitched or a low-pitched sound. This happened because different properties of the environment, such as the angle of the cat's head, affected specific frequencies in different ways.
As expected, the neurons' responses were also affected by sound frequency. Indeed, the neurons' behaviour mirrored that of the sound waves themselves. This shows that neurons do not, as previously thought, simply react to specific interaural differences. Instead, these neurons use both sound frequency and interaural time differences to produce a thorough approximation of the sound's location. The precise mechanisms that generate this brain adaptation to the animal's environment remain to be determined.
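The acoustical analysis described in this study amounts to estimating an interaural delay separately at each frequency, for example from the interaural phase of the cross-spectrum divided by 2πf. A minimal sketch with stand-in signals; for the pure delay simulated here the estimate is flat across frequency, whereas the paper's point is that real ears deviate from this:

```python
import numpy as np

fs = 48000
n = 4096
left = np.random.randn(n)                    # stand-in for the left-ear recording
right = np.roll(left, 14)                    # right ear delayed by 14 samples (~292 us)

L = np.fft.rfft(left)
R = np.fft.rfft(right)
f = np.fft.rfftfreq(n, 1 / fs)

phase = np.unwrap(np.angle(L * np.conj(R)))  # interaural phase of the cross-spectrum
itd = np.zeros_like(f)
itd[1:] = phase[1:] / (2 * np.pi * f[1:])    # frequency-dependent delay estimate, seconds
```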
PMCID: PMC4439524  PMID: 25915620
electrophysiology; auditory brainstem; cat; other
19.  Studies on Bilateral Cochlear Implants at the University of Wisconsin’s Binaural Hearing and Speech Lab 
This report highlights research projects relevant to binaural and spatial hearing in adults and children. In the past decade we have made progress in understanding the impact of bilateral cochlear implants (BiCIs) on performance in adults and children. However, BiCI users typically do not perform as well as normal hearing (NH) listeners. In this paper we describe the benefits of BiCIs compared with a single CI, focusing on measures of spatial hearing and speech understanding in noise. We highlight the fact that in BiCI listening the devices in the two ears are not coordinated, thus binaural spatial cues that are available to NH listeners are not available to BiCI users. Through the use of research processors that carefully control the stimulus delivered to each electrode in each ear, we are able to preserve binaural cues and deliver them with fidelity to BiCI users. Results from those studies are discussed as well, with a focus on the effect of age at onset of deafness and plasticity of binaural sensitivity. Our work with children has expanded both in the number of subjects tested and the age range included. We have now tested dozens of children ranging in age from 2 to 14 years. Our findings suggest that spatial hearing abilities emerge with bilateral experience. While we originally focused on studying performance in the free field, where real-world listening experiments are conducted, more recently we have begun to conduct studies under carefully controlled binaural stimulation conditions with children as well. We have also studied language acquisition and speech perception and production in young CI users. Finally, a running theme of this research program is the systematic investigation of the numerous factors that contribute to spatial and binaural hearing in BiCI users. By using CI simulations (with vocoders) and studying NH listeners under degraded listening conditions, we are able to tease apart limitations due to the hardware/software of the CI systems from limitations due to neural pathology.
PMCID: PMC3517294  PMID: 22668767
20.  Changes in auditory perceptions and cortex resulting from hearing recovery after extended congenital unilateral hearing loss 
Monaural hearing induces auditory system reorganization. Imbalanced input also degrades time-intensity cues for sound localization and signal segregation for listening in noise. While there have been studies of bilateral auditory deprivation and later hearing restoration (e.g., cochlear implants), less is known about unilateral auditory deprivation and subsequent hearing improvement. We investigated the effects of long-term congenital unilateral hearing loss on localization, speech understanding, and cortical organization following hearing recovery. Hearing in the congenitally affected ear of a 41-year-old female improved significantly after stapedotomy and reconstruction. Pre-operative hearing threshold levels showed unilateral, mixed, moderately severe to profound hearing loss. The contralateral ear had hearing threshold levels within normal limits. Testing was completed prior to, and 3 and 9 months after, surgery. Measurements were of sound localization with intensity-roved stimuli and speech recognition in various noise conditions. We also evoked magnetic resonance signals with monaural stimulation to the unaffected ear. Activation magnitudes were determined in core, belt, and parabelt auditory cortex regions via an interrupted single event design. Hearing improvement following 40 years of congenital unilateral hearing loss resulted in substantially improved sound localization and speech recognition in noise. Auditory cortex also reorganized. Contralateral auditory cortex responses were increased after hearing recovery, and the extent of activated cortex was bilateral, including a greater portion of the posterior superior temporal plane. Thus, prolonged predominant monaural stimulation did not prevent auditory system changes consequent to restored binaural hearing. Results support future research on unilateral auditory deprivation effects and plasticity, with consideration for length of deprivation, age at hearing correction, and degree and type of hearing loss.
PMCID: PMC3861790  PMID: 24379761
unilateral hearing loss; congenital; conductive; stapedotomy; brain imaging; sound localization; speech recognition
21.  Binaural-Bimodal Fitting or Bilateral Implantation for Managing Severe to Profound Deafness: A Review 
Trends in Amplification  2007;11(3):161-192.
There are now many recipients of unilateral cochlear implants who have usable residual hearing in the nonimplanted ear. To avoid auditory deprivation and to provide binaural hearing, a hearing aid or a second cochlear implant can be fitted to that ear. This article addresses the question of whether better binaural hearing can be achieved with binaural/bimodal fitting (combining a cochlear implant and a hearing aid in opposite ears) or bilateral implantation. In the first part of this article, the rationale for providing binaural hearing is examined. In the second part, the literature on the relative efficacy of binaural/bimodal fitting and bilateral implantation is reviewed. Most studies comparing either mode of bilateral stimulation with unilateral implantation reported some binaural benefits, on average, in some test conditions, but revealed that some individuals benefited whereas others did not. There were no controlled comparisons between binaural/bimodal fitting and bilateral implantation and no evidence to support the efficacy of one mode over the other. In the third part of the article, a crossover trial of two adults who had binaural/bimodal fitting and who subsequently received a second implant is reported. The findings at 6 and 12 months after they received their second implant indicated that binaural function developed over time, and the extent of benefit depended on which abilities were assessed for the individual. In the fourth and final parts of the article, clinical issues relating to candidacy for binaural/bimodal fitting and strategies for bimodal fitting are discussed, with implications for future research.
PMCID: PMC4111363  PMID: 17709573
bimodal hearing; bilateral implantation; deafness
22.  Dichotic Hearing in Elderly Hearing Aid Users Who Choose to Use a Single-Ear Device 
Introduction: Elderly individuals with bilateral hearing loss often do not use hearing aids in both ears. Because of this, dichotic tests to assess hearing in this group may help identify degenerative processes peculiar to aging and guide hearing aid selection.
Objective: To evaluate dichotic hearing for a group of elderly hearing aid users who did not adapt to using binaural devices and to verify the correlation between ear dominance and the side chosen to use the device.
Methods: A cross-sectional descriptive study involving 30 subjects from 60 to 81 years old, of both genders, with an indication for bilateral hearing aids for over 6 months but using only a single device. Medical history, pure tone audiometry, and dichotic listening tests were all completed.
Results: All subjects (100%) failed the dichotic digit test; 94% of the sample preferred to use the device in one ear because bilateral use bothered them and affected speech understanding. For 6%, the concern was aesthetics. In the dichotic digit test, there was significant predominance of the right ear over the left, and there was a significant correlation between the dominant side and the ear chosen by the participant for use of the hearing aid.
Conclusion: In elderly subjects with bilateral hearing loss who have chosen to use only one hearing aid, there is dominance of the right ear over the left in dichotic listening tasks. There is a correlation between the dominant ear and the ear chosen for hearing aid fitting.
PMCID: PMC4297034  PMID: 25992120
hearing aids; auditory perception; aged; deafness; hearing
23.  Recognition and Localization of Speech by Adult Cochlear Implant Recipients Wearing a Digital Hearing Aid in the Nonimplanted Ear (Bimodal Hearing) 
The use of bilateral amplification is now common clinical practice for hearing aid users but not for cochlear implant recipients. In the past, most cochlear implant recipients were implanted in one ear and wore only a monaural cochlear implant processor. There has been recent interest in benefits arising from bilateral stimulation that may be present for cochlear implant recipients. One option for bilateral stimulation is the use of a cochlear implant in one ear and a hearing aid in the opposite nonimplanted ear (bimodal hearing).
This study evaluated the effect of wearing a cochlear implant in one ear and a digital hearing aid in the opposite ear on speech recognition and localization.
Research Design
A repeated-measures correlational study was completed.
Study Sample
Nineteen adult Cochlear Nucleus 24 implant recipients participated in the study.
The participants were fit with a Widex Senso Vita 38 hearing aid to achieve maximum audibility and comfort within their dynamic range.
Data Collection and Analysis
Soundfield thresholds, loudness growth, speech recognition, localization, and subjective questionnaires were obtained six to eight weeks after the hearing aid fitting. Testing was completed in three conditions: hearing aid only, cochlear implant only, and cochlear implant and hearing aid (bimodal). All tests were repeated four weeks after the first test session. Repeated-measures analysis of variance was used to analyze the data. Significant effects were further examined using pairwise comparison of means or, in the case of continuous moderators, regression analyses. The speech-recognition and localization tasks were unique in that a speech stimulus presented from a variety of roaming azimuths (140-degree loudspeaker array) was used.
Results
Performance in the bimodal condition was significantly better for speech recognition and localization than in the cochlear implant–only and hearing aid–only conditions. Performance also differed between conditions when the location (i.e., the side of the loudspeaker array that presented the word) was analyzed. In the bimodal condition, performance on the speech-recognition and localization tasks was equivalent regardless of which side of the loudspeaker array presented the word, whereas performance in the monaural conditions (hearing aid only and cochlear implant only) was significantly poorer when words were presented on the side with no stimulation. Binaural loudness summation of 1–3 dB was seen in soundfield thresholds and loudness growth in the bimodal condition. Measures of the audibility of sound with the hearing aid, including unaided thresholds, soundfield thresholds, and the Speech Intelligibility Index, were significant moderators of speech recognition and localization. Based on the questionnaire responses, participants showed a strong preference for bimodal stimulation.
Conclusions
These findings suggest that a well-fit digital hearing aid worn in conjunction with a cochlear implant benefits speech recognition and localization. The dynamic test procedures used in this study illustrate the importance of bilateral hearing for locating, identifying, and switching attention between multiple speakers. It is recommended that unilateral cochlear implant recipients with measurable unaided hearing thresholds be fit with a hearing aid.
PMCID: PMC2876351  PMID: 19594084
Bimodal hearing; cochlear implant; hearing aid; localization; speech recognition
24.  Human Neuromagnetic Steady-State Responses to Amplitude-Modulated Tones, Speech, and Music 
Ear and Hearing  2014;35(4):461-467.
Auditory steady-state responses, which can be elicited by various periodic sounds, provide information about subcortical and early cortical auditory processing. Steady-state responses to amplitude-modulated pure tones have been used to scrutinize binaural interaction by frequency-tagging the two ears’ inputs at different frequencies. Unlike pure tones, speech and music are physically very complex, as they include many frequency components, pauses, and large temporal variations. To examine the utility of magnetoencephalographic (MEG) steady-state fields (SSFs) in the study of early cortical processing of complex natural sounds, the authors tested the extent to which amplitude-modulated speech and music can elicit reliable SSFs.
MEG responses were recorded to 90-s-long binaural tones, speech, and music, amplitude-modulated at 41.1 Hz at four different depths (25, 50, 75, and 100%). The subjects were 11 healthy, normal-hearing adults. MEG signals were averaged in phase with the modulation frequency, and the sources of the resulting SSFs were modeled by current dipoles. After the MEG recording, intelligibility of the speech, musical quality of the music stimuli, naturalness of music and speech stimuli, and the perceived deterioration caused by the modulation were evaluated on visual analog scales.
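To make the stimulus construction and phase-locked averaging concrete, here is a minimal sketch under assumed parameters: the sampling rate, epoch length, and placeholder signals are invented, and only the 41.1 Hz modulation frequency and the modulation depths come from the study.

```python
# Sketch of (1) amplitude-modulating a carrier at 41.1 Hz and (2) averaging
# a recording in phase with the modulation. Placeholder noise stands in for
# the actual stimuli and MEG data.
import numpy as np

fm = 41.1            # modulation frequency from the study (Hz)
fs = 100 * fm        # assumed rate: exactly 100 samples per modulation cycle
depth = 0.75         # one of the depths used (0.25, 0.5, 0.75, 1.0)

t = np.arange(0, 90.0, 1.0 / fs)                 # 90-s stimulus
carrier = np.random.randn(t.size)                # stand-in for tone/speech/music
stimulus = (1.0 + depth * np.sin(2 * np.pi * fm * t)) * carrier

# Phase-locked averaging: epochs spanning a whole number of modulation
# cycles are averaged, so activity locked to fm survives while activity
# unrelated to the modulation averages toward zero.
meg = np.random.randn(t.size)                    # stand-in for one MEG channel
epoch_len = int(round(fs / fm)) * 41             # 41 cycles (~1 s) per epoch
n_epochs = meg.size // epoch_len
ssf = meg[: n_epochs * epoch_len].reshape(n_epochs, epoch_len).mean(axis=0)
```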
The perceived quality of the stimuli decreased with increasing modulation depth, more strongly for music than for speech; yet all subjects considered the speech intelligible even at 100% modulation. SSFs were strongest for tones and weakest for speech stimuli; for all stimuli, the amplitudes increased with increasing modulation depth. SSFs to tones were reliably detectable at all modulation depths (in all subjects in the right hemisphere, in 9 subjects in the left hemisphere), and to music stimuli at 50 to 100% depths, whereas speech usually elicited clear SSFs only at 100% depth.
The hemispheric balance of SSFs was toward the right hemisphere for tones and speech, whereas SSFs to music showed no lateralization. In addition, the right lateralization of SSFs to the speech stimuli decreased with decreasing modulation depth.
The results showed that SSFs can be reliably measured in response to amplitude-modulated natural sounds, with slightly different hemispheric lateralization for different carrier sounds. With speech stimuli, modulation at 100% depth is required, whereas for music a modulation depth of 75% or even 50% provides a reasonable compromise between the signal-to-noise ratio of the SSFs and sound quality or perceptual requirements. SSF recordings thus seem feasible for assessing the early cortical processing of natural sounds.
Auditory steady-state responses to pure tones have been used to study subcortical and cortical processing, to scrutinize binaural interaction, and to evaluate hearing objectively. In daily life, however, we encounter sounds that are physically much more complex, such as music and speech. This study demonstrates that not only pure tones but also amplitude-modulated speech and music, both perceived to have tolerable sound quality, can elicit reliable magnetoencephalographic steady-state fields. The strengths and hemispheric lateralization of the responses differed between the carrier sounds. The results indicate that steady-state responses could be used to study the early cortical processing of natural sounds.
PMCID: PMC4072443  PMID: 24603544
Amplitude modulation; Auditory; Frequency tagging; Magnetoencephalography; Natural stimuli
25.  The moving minimum audible angle is smaller during self motion than during source motion 
We are rarely perfectly still: our heads rotate in three axes and move in three dimensions, constantly varying the spectral and binaural cues at the eardrums. In spite of this motion, static sound sources in the world are typically perceived as stable objects. This argues that the auditory system, in a manner not unlike the vestibulo-ocular reflex, works to compensate for self motion and stabilize our sensory representation of the world. We tested a prediction arising from this postulate: that self motion should be processed more accurately than source motion. We used an infrared motion-tracking system to measure head angle, and real-time interpolation of head-related impulse responses to create “head-stabilized” signals that appeared to remain fixed in space as the head turned. After being presented with pairs of simultaneous signals consisting of a man and a woman each speaking a snippet of speech, normal-hearing and hearing-impaired listeners were asked to report whether the female voice was to the left or the right of the male voice. In this way we measured the moving minimum audible angle (MMAA). This measurement was made while listeners turned their heads back and forth between ±15° and the signals were stabilized in space. After this “self-motion” condition we measured the MMAA in a second “source-motion” condition in which listeners remained still and the virtual locations of the signals were moved along the trajectories recorded in the first condition. For both normal-hearing and hearing-impaired listeners, the MMAA for signals moving relative to the head was ~1–2° smaller when the movement was the result of self motion than when it was the result of source motion, even though the motion with respect to the head was identical. These results, together with the results of past experiments, suggest that spatial processing involves an ongoing and highly accurate comparison of spatial acoustic cues with self-motion cues.
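The head-stabilization technique lends itself to a short sketch. The version below is deliberately simplified: it selects the nearest measured HRIR per audio block rather than truly interpolating in real time as the study's system did, and the HRIR bank, block size, and angles are all invented.

```python
# Simplified sketch of "head-stabilized" rendering: the world-fixed source
# azimuth is held constant, and each block of audio is convolved with the
# HRIR pair for the source's azimuth *relative to the current head angle*,
# so the source appears to stay fixed in space as the head turns.
import numpy as np

fs = 44100
hrir_az = np.arange(-180, 180, 5)              # measured azimuths (deg)
hrirs = np.random.randn(hrir_az.size, 2, 256)  # stand-in bank [az, L/R, taps]

def render_block(block, source_az_deg, head_yaw_deg):
    """Convolve one mono block with the HRIR pair for the head-relative angle."""
    rel = (source_az_deg - head_yaw_deg + 180) % 360 - 180  # wrap to [-180, 180)
    i = np.argmin(np.abs(hrir_az - rel))                    # nearest HRIR
    left = np.convolve(block, hrirs[i, 0])
    right = np.convolve(block, hrirs[i, 1])
    return np.stack([left, right])

# As the tracked head yaw sweeps between -15 and +15 degrees, the
# head-relative angle changes in the opposite direction, holding the
# source's world position constant.
block = np.random.randn(1024)
for yaw in (-15.0, 0.0, 15.0):
    out = render_block(block, source_az_deg=30.0, head_yaw_deg=yaw)
```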
PMCID: PMC4151253  PMID: 25228856
spatial hearing; head movements; auditory motion; sound localization; motion tracking; self-motion compensation
