One of the challenges for evaluating new otoprotective agents for potential benefit in human populations is the lack of an established clinical paradigm with real-world relevance. These studies were explicitly designed to develop a real-world digital music exposure that reliably induces temporary threshold shift (TTS) in normal-hearing human subjects.
Thirty-three subjects participated in studies that measured effects of digital music player use on hearing. Subjects selected either rock or pop music, which was then presented at 93–95 (n=10), 98–100 (n=11), or 100–102 (n=12) dBA in-ear exposure level for a period of four hours. Audiograms and distortion product otoacoustic emissions (DPOAEs) were measured prior to and after music exposure. Post-music tests were initiated 15 min, 1 hr 15 min, 2 hr 15 min, and 3 hr 15 min after the exposure ended. Additional tests were conducted the following day and one week later.
Changes in thresholds after the lowest level exposure were difficult to distinguish from test-retest variability; however, TTS was reliably detected after higher levels of sound exposure. Changes in audiometric thresholds had a “notch” configuration, with the largest changes observed at 4 kHz (mean=6.3±3.9 dB; range=0–13 dB). Recovery was largely complete within the first 4 hours post-exposure, and all subjects showed complete recovery of both thresholds and DPOAE measures when tested 1 week post-exposure.
These data provide insight into the variability of TTS induced by music player use in a healthy, normal-hearing, young adult population, with music playlist, level, and duration carefully controlled. These data confirm the likelihood of temporary changes in auditory function following digital music player use. Such data are essential for the development of a human clinical trial protocol that provides a highly powered design for evaluating novel therapeutics in human clinical trials. Care must be taken to fully inform potential subjects in future TTS studies, including protective agent evaluations, that some noise exposures have resulted in neural degeneration in animal models, even when both audiometric thresholds and DPOAE levels returned to pre-exposure values.
No therapeutics for the prevention of hearing loss are approved by the FDA at this time. However, animal studies have clearly demonstrated that a variety of antioxidants and other agents have the potential to reduce hearing loss occurring as a consequence of noise exposure, aminoglycoside antibiotics, the chemotherapeutic drug cisplatin, and perhaps hearing loss occurring as a function of age. Improved understanding of the mechanisms that lead to cell death and hearing loss has thus driven significant interest in the potential for development of novel human therapeutics (for recent reviews, see Abi-Hachem et al., 2010; Poirrier et al., 2010; Campbell & Le Prell, 2011; Le Prell & Bao, 2011). Because the different agents have to date been evaluated in different species using protocols with different insults and different treatment paradigms (method of delivery and duration), it is difficult, if not impossible, to directly compare or contrast efficacy across the different agents (for recent review, see Le Prell & Bao, 2011). Several promising agents shown to be effective in pre-clinical animal models of noise-induced hearing loss (NIHL) have been evaluated in human clinical trials (Kramer et al., 2006; Lin et al., 2010; Le Prell et al., 2011b; Lindblad et al., 2011), and other clinical trials are planned (see, for example, NCT00808470, NCT01345474). Clearly, the specific trial designs for these completed, ongoing, and upcoming human NIHL studies are largely driven by investigator-specific access to unique subject populations. Thus, it will be equally challenging to compare efficacy of different agents across human studies.
Design differences across studies are worthy of attention. While the majority of pre-clinical studies on the prevention of NIHL have measured reductions in permanent threshold shift (PTS), the majority of human trials to date have focused on the potential to reduce temporary threshold shift (TTS) (Attias et al., 2004; Quaranta et al., 2004; Kramer et al., 2006; Lin et al., 2010; Le Prell et al., 2011b; Lindblad et al., 2011). The clinical relevance of any drug that is shown to reduce human PTS is clear, but the use of TTS models requires some additional discussion. TTS trials require a shorter time to complete, cost less, and have decreased potential for subject attrition. Additionally, these trials may provide better control over subject safety, as subjects are not expected to develop PTS regardless of whether they are assigned to receive active treatment agents or inactive placebo. The rationale for TTS noise trials is largely based on the assumption that demonstrating reduction of TTS provides “proof of concept” for potential protection against PTS; i.e., it has some predictive value. Most agents shown to reduce TTS have also been shown to reduce PTS (e.g., ebselen, magnesium, dietary nutrient combination), although some other agents that reduce PTS have had less consistent effects in TTS models (D-methionine, N-acetylcysteine) (for detailed discussion of individual agents, see Le Prell & Bao, 2011). Thus, taken together, the data appear to suggest that agents that reduce TTS are likely to reduce PTS, but failure to reduce TTS does not preclude the possibility that an agent will reduce PTS. These findings are consistent with existing data on the histopathological correlates of TTS and PTS (Wang et al., 2002; for recent review, see Hu, 2011) as well as the molecular response to TTS and PTS-inducing sounds (Yamashita et al., 2008). We stress the need for additional confirmatory data in PTS trials in order to extrapolate from protection against TTS to protection against PTS.
Although TTS study designs have emerged as the model of choice for initial assessment of proposed otoprotective agents, there are a number of shortcomings in the TTS models available to date. Shortcomings of previous TTS-based clinical trials include variability of noise exposure when real-world nightclub noise serves as an insult (up to 10-dB difference in exposure level across subject cohorts tested on different days, see Kramer et al., 2006), failure to measure robust TTS in subjects (Lin et al., 2010; Le Prell et al., 2011b; Lindblad et al., 2011), and use of either broad-band (Attias et al., 2004) or narrow-band noise (Quaranta et al., 2004) that is unpleasant to listen to and lacks real-world relevance. Alternative TTS models for otoprotection studies could be drawn from several non-drug studies in which investigators have measured subject hearing levels after listening to music. A number of early studies followed a model in which subjects were asked to select their own listening level, resulting in significant variability in user-selected listening levels and small sample sizes for any given listening level, with TTS typically measured in only a subset of the subjects (Lee et al., 1985; Pugsley et al., 1993; Hellstrom et al., 1998). In other more recent studies, either sound levels or volume settings have been set by the investigator, resulting in more consistent exposures across subjects (Krishnamurti & Grandjean, 2003; Bhagat & Davis, 2008; Keppler et al., 2010). However, over the course of 17 songs, exposure levels varied by as much as 10 dB from song to song (Keppler et al., 2010), consistent with a recent report of a greater than 20 dB range in song levels within a sample of 326 songs played at a fixed volume setting (Le Prell et al., 2011c). Importantly, none of the music player studies to date have resulted in reliable TTS across subjects, suggesting that additional development of the music player model for use in clinical trials is still needed. To reduce song-to-song variability, a procedure for manipulating digital music files to provide a controlled, pleasant-to-listen-to exposure with real-world relevance was developed (Le Prell et al., 2011c). Here, we describe TTS in normal hearing listeners who listened to that manipulated music using a digital audio player (DAP). In addition to conventional audiometric assessment to detect TTS at “expected” frequencies (i.e., 3, 4, and/or 6 kHz), the current tests included additional peripheral function measures, including extended high frequency (EHF) measurement of hearing sensitivity at frequencies from 10 to 16 kHz, and repeat measurements of DPOAE amplitude.
Subjects were 33 normal-hearing young adult college student volunteers (13 male, 20 female, mean age=20.9 years; range=18–27) drawn from an initial pool of 73 volunteers (27 male, 46 female; mean age=21.3 years; range=18–31). Advertisements posted at multiple locations on the University of Florida campus invited normal-hearing subjects to participate in a study of temporary changes in hearing after listening to music on a DAP. When they responded to advertisements, prospective subjects described their hearing as normal. Prospective subjects provided written informed consent1, and were then required to undergo additional screening to confirm they met the normal hearing criteria. Subjects were required to avoid loud sound for 48 hours prior to any scheduled hearing tests. All protocols and procedures were approved by Institutional Review Boards at the University of Florida (IRB-01) and the University of Michigan (IRBMED), and all data were collected under the supervision of the NIH and an NIH-selected data safety monitoring board (DSMB).
Subjects completed brief health surveys, followed by hearing and tinnitus surveys (described in Le Prell et al., 2011a). Visual examination of the ear canal and tympanic membrane was conducted to ensure normal anatomy and the absence of obstructive debris. Two of the 73 subjects had abnormal otoscopy, and were excluded from subsequent tests. After otoscopic assessment, tympanometric measures were collected using a GSI 38 immittance measurement device that was in compliance with ANSI S3.39 and IEC 601-1 criteria. Middle ear pressure (MEP), peak compensated static acoustic admittance (Peak Ytm; +200 daPa as the ear canal referent) and acoustic equivalent volume (Vea) were measured. Normal middle ear function was defined by tympanometric configurations with MEP values from −140 to +40 daPa (based on the 90% range for adults, see Margolis & Hunter, 2000), Peak Ytm values from 0.3 to 1.8 ml, and Vea values from 0.8 to 2.1 cm³. One subject failed to meet the tympanometric criterion, and was excluded from subsequent tests. Conventional pure-tone air conduction thresholds were assessed for the 70 volunteers who passed the otoscopic and tympanometric tests.
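For illustration only, the middle ear screening criteria described above can be expressed as a simple check; the function and argument names below are hypothetical and are not drawn from the study software.

```python
# Hypothetical illustration of the tympanometric screening criteria described
# above; argument names and ranges follow the text, not the study software.

def passes_tympanometry(mep_daPa, peak_ytm_ml, vea_cm3):
    """True if middle ear measures fall within the normal ranges used here."""
    return (-140 <= mep_daPa <= 40            # middle ear pressure (daPa)
            and 0.3 <= peak_ytm_ml <= 1.8     # peak compensated static admittance (ml)
            and 0.8 <= vea_cm3 <= 2.1)        # acoustic equivalent volume (cm^3)

print(passes_tympanometry(-20, 0.9, 1.3))     # True
print(passes_tympanometry(-180, 0.9, 1.3))    # False (MEP below -140 daPa)
```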
Audiometric threshold measurement was conducted using a GSI 61 diagnostic audiometer with EAR 3A insert earphones in a double-walled sound-treated test booth meeting ANSI/ASA S3.1-1999 (R2008) specifications for audiometric test rooms. The GSI 61 clinical audiometer was calibrated annually according to ANSI S3.6-1996. Pure-tone air conduction thresholds were obtained using a modified Hughson-Westlake procedure for test frequencies of 0.25, 0.5, 1, 2, 3, 4, 6, and 8 kHz, as described by Le Prell et al. (2011a). In brief, initial descent towards threshold was accomplished in 10-dB steps. Beginning with the first non-response, levels were increased by 2 dB for each non-response, and decreased by 5 dB after each correct detection response. Threshold was defined as the lowest level at which two responses were obtained out of three presentations on an ascending run. Responses were evaluated for reliability using repeat tests at 2 and 8 kHz in each ear; responses were deemed reliable if the difference between test and retest thresholds was ≤ 5 dB, a criterion previously used by Fausti et al. (1999). Bone-conduction pure-tone audiometry was conducted for test frequencies of 0.25, 0.5, 1, 2, 3, and 4 kHz if the air-conduction threshold at that frequency was between 15 dB HL and 25 dB HL. Normal threshold sensitivity was defined as: 1) air conduction thresholds no worse than 25 dB HL from 0.25 – 8 kHz, 2) threshold asymmetry ≤ 15 dB at all test frequencies, and 3) air-bone gaps ≤ 10 dB if the air conduction threshold was ≥ 15 dB HL but ≤ 25 dB HL.
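As a rough illustration of the adaptive track described above, the following minimal Python sketch simulates the modified Hughson-Westlake rules (10-dB initial descent, 2-dB steps up after a miss, 5-dB steps down after a hit); the 2-of-3 ascending criterion is approximated, and the `responds` callback is a hypothetical stand-in for a listener.

```python
# Simplified, hypothetical sketch of the modified Hughson-Westlake track
# described above: 10-dB initial descent, 2-dB steps up after a miss, 5-dB
# steps down after a hit; threshold is taken as the lowest level collecting
# two hits on ascending presentations (the full 2-of-3 bookkeeping is
# approximated). `responds(level)` is a stand-in for the listener.

def track_threshold(responds, start_level=40, floor=-10, ceiling=110, max_trials=200):
    level = start_level
    while responds(level) and level > floor:       # initial 10-dB descent
        level -= 10
    heard = False                                  # first non-response reached
    ascending_hits = {}
    for _ in range(max_trials):
        step_up = not heard
        level = min(level + 2, ceiling) if step_up else max(level - 5, floor)
        heard = responds(level)
        if step_up and heard:                      # hit on an ascending presentation
            ascending_hits[level] = ascending_hits.get(level, 0) + 1
            if ascending_hits[level] >= 2:
                return level
    return None                                    # no stable threshold found

# Idealized listener with a true threshold of 12 dB HL:
print(track_threshold(lambda level: level >= 12))  # -> 12
```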
Subjects that enrolled in the study after completing the screening were compensated $10–$15 per hour for their time. On the first day of the study, subjects answered a brief series of questions regarding recent noise exposure and current tinnitus. Then, they underwent conventional pure-tone air conduction threshold testing at 0.25, 0.5, 1, 2, 3, 4, 6, 8, 10, 12.5, 14 and 16 kHz, to establish pre-music baseline threshold sensitivity. Thresholds were measured at 10, 12.5, 14, and 16 kHz using the same modified Hughson-Westlake procedure described above, but circum-aural headphones (Sennheiser HDA200; Sennheiser Electronic Corporation, Old Lyme, CT) were used in place of the insert earphones. After pure-tone thresholds were measured for both ears, DPOAE amplitude was measured using the Mimosa HearID system (Mimosa Acoustics Inc., Champaign, IL), in combination with an Etymotic Research microphone-earphone assembly (ER 10C, Etymotic Research Inc., Elk Grove Village, IL). The closed, calibrated probe assembly was coupled to the subject’s ear by a foam ear tip. Responses were elicited by two simultaneously presented ‘primary’ tones (frequencies f1 and f2) at an f2/f1 ratio of 1.2, and with intensity levels (L1 and L2) at L2=L1-10 dB. To facilitate comparisons with audiometric thresholds, f2 frequencies (2, 3, 4, 6, 8, and 12 kHz) matched the audiometric test frequencies. Measures of DPOAE response growth (input-output) with increasing stimulus level (L1=25 to 65 dB SPL, with stimulus levels decreasing in 5-dB steps within frequencies) were obtained at each of the six f2 frequencies. DPOAE amplitudes (2f1-f2) and adjacent noise floors were averaged using a simplified stopping rule; i.e., with all tests averaged over 10 seconds. The DPOAE protocol specifically followed Goldman et al. (2006), who used this DPOAE protocol to measure effects of noise on DPOAE responses in workers exposed to occupational noise insult. Other DPOAE data collection protocols are also sensitive to noise insult and should be considered for future investigations given evidence that they optimize the amplitude of the DPOAE response. For example, in their studies on the effects of noise on hearing, Marshall and colleagues (Lapsley Miller et al., 2006; Marshall et al., 2009) routinely use L1/L2 levels of 57/45, 59/50, 61/55 (based on the L1=0.4L2+39 dB formula provided by Kummer et al., 1998), and 65/45 (based on sensitivity to TTS, see Marshall et al., 2001). Another alternative to the current test protocol is drawn from recent work by Neely et al. (2005), who reported that individual optimization of L1 levels for each ear can result in larger and less variable DPOAE measurements. Subsequent to OAE tests, the music listening period was initiated.
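The stimulus bookkeeping for the DPOAE input-output protocol described above (f2/f1 = 1.2, L2 = L1 − 10 dB, L1 stepped from 65 down to 25 dB SPL at f2 frequencies matched to the audiogram) can be sketched as follows; this is an illustration of the parameter rules only, not the HearID acquisition software, and the Kummer et al. (1998) scissors rule is included solely because it is cited above as an alternative.

```python
# Hypothetical sketch of the primary-tone parameters for the DPOAE I/O
# protocol described above: f2/f1 = 1.2, L2 = L1 - 10 dB, f2 matched to the
# audiometric test frequencies, L1 stepped from 65 down to 25 dB SPL.

F2_KHZ = [2, 3, 4, 6, 8, 12]                 # f2 frequencies (kHz)
F2_F1_RATIO = 1.2

def io_conditions():
    for f2 in F2_KHZ:
        f1 = round(f2 / F2_F1_RATIO, 3)      # primary frequency ratio f2/f1 = 1.2
        for l1 in range(65, 20, -5):         # 65, 60, ..., 25 dB SPL
            yield {"f1_kHz": f1, "f2_kHz": f2, "L1": l1, "L2": l1 - 10}

def kummer_l1(l2):
    """Alternative 'scissors' rule of Kummer et al. (1998): L1 = 0.4*L2 + 39."""
    return 0.4 * l2 + 39                     # e.g., L2 = 50 -> L1 = 59 dB SPL

print(next(io_conditions()))                 # first condition: f2 = 2 kHz, L1 = 65, L2 = 55
print(kummer_l1(50))                         # 59.0
```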
Subjects were allowed to select from a “pop music playlist” and a “rock music playlist” loaded onto an Apple iPod®; the iPod® was selected based on its overall popularity and reported use by adolescents and young adults (Danhauer et al., 2009). The music listening period was 4 hours. The lock button was used to protect against accidental interruption of the exposure as well as mid-session changes in volume setting. Subjects were reminded that they could withdraw from the study at any time during the music listening period if they were uncomfortable, but that the music could not be interrupted or modified. Music was delivered through Etymotic 6isolator™ earphones (ER6I; Etymotic Research, Inc.), with clean earphone covers placed on the insert earphones for each subject. The ER6I earphones fit securely into the ear canal, reducing the potential for variability in listening level during an individual session, and across sessions. Most subjects used small 3-flange ear tips (ER6I-15SM); larger ear tips were available for subjects with larger ear canals (ER6I-18).
Three investigator-selected listening levels were used in three sequential studies (“DAP1”, n=10; “DAP2”, n=11; “DAP3”, n=12); lower listening levels were tested prior to higher listening levels, and DSMB and IRB approval (at both University of Florida and University of Michigan) were obtained prior to each increase in sound level, based on the demonstrated recovery of thresholds at each sequential listening level2. Sound levels were measured with the iPod® output delivered through the 6isolator™ earphones (ER6I; Etymotic Research, Inc.) inserted into Type 4157 Artificial Ear Simulators (Brüel & Kjær), which conform to IEC 60711 (1981), ANSI S3.25-1979 (R1986), and ITU-T Rec. P.57 (Type 2). The 3-flange earphone inserts used by the subjects were used during coupler calibrations; these provided a tight seal within the external ear simulator DB2012. Spectral data were sampled virtually continuously (at 0.001 ms intervals) using the PULSE system (version 12.5, Brüel & Kjær, Denmark). These data samples entered a multi-buffer that automatically exported average sound levels (sum of 1/3-octave bands from 20 Hz to 20 kHz) for the previous 64 sec interval at 1 sec intervals; those levels are shown in Figure 1. There were 14,400 time-level samples collected for each 4-hour playlist and additional descriptive data are presented in Table 1. Playlist calibrations were repeated at the end of each study to confirm that levels were unchanged from initial device calibration.
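As an illustration of the level bookkeeping described above (overall level as the sum of 1/3-octave bands, averaged over the preceding 64 s and exported at 1-s intervals), a minimal sketch follows; it assumes hypothetical band-level inputs and is not the PULSE system's implementation.

```python
# Hypothetical sketch of the level bookkeeping described above: overall level
# as the power sum of 1/3-octave band levels, and an energy average over the
# preceding 64 s exported once per second. Not the PULSE implementation.
import math
from collections import deque

def overall_level(band_levels_db):
    """Power-sum 1/3-octave band levels (dB) into a single overall level."""
    return 10 * math.log10(sum(10 ** (lvl / 10) for lvl in band_levels_db))

def running_64s_average(one_second_levels_db):
    """Yield, once per second, the energy average of the preceding 64 s."""
    window = deque(maxlen=64)
    for level in one_second_levels_db:
        window.append(level)
        yield 10 * math.log10(sum(10 ** (l / 10) for l in window) / len(window))

# Three equal 91-dB bands power-sum to ~95.8 dB overall.
print(round(overall_level([91, 91, 91]), 1))
```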
The initial exposure level (“DAP1”) had an average level of ~94 dBA (coupler level). This listening level was explicitly selected to deliver a highly conservative starting exposure; OSHA standards define worker exposure to 94 dBA noise as a 100% dose after 4.6 hours of exposure (Table G-16a). For those who are not familiar with OSHA standards, we solve for the dose for the 4-hour exposure using the formula “Dose = 100 (C/T)” where C equals the total time of exposure at a specific noise level, and T equals the reference duration for that level. Thus, Dose=100(4 hours/4.6 hours), which we solve as Dose=87%. Two key points should be stressed. First, OSHA standards are based not on the hazard associated with a single exposure, but rather on the hazard associated with repetition of that noise insult 5 days/week over the course of a 40-year career. Second, OSHA standards are based on free-field sound exposure, and the free-field equivalent (FFE) sound level will be less than the level measured in a coupler because sound presented in the free field is at a higher level when it reaches the tympanic membrane based on both the frequency spectrum of the sound and the resonance properties of the ear canal (Ward et al., 2003). Several studies have shown some 10–20 dB gain within the 2–4 kHz region, although sounds in the range of 2–4 kHz are clearly not the only sounds influenced by head-related transfer functions and ear canal resonance properties (Wiener & Ross, 1946; Shaw, 1975; Hellstrom, 1993; Pierson et al., 1994).
Some earlier studies report in-ear (or in-coupler) sound level data whereas other investigators have converted in-ear/in-coupler measured levels to FFE. The specific conversion from in-ear/in-coupler level to FFE requires measurement of both music spectrum and individual ear canal transfer functions. In general, however, FFE levels are typically on the order of 5 to 15 dB less than the measured in-ear level (Bradley et al., 1987; Rice et al., 1987; Skrainar et al., 1987; Turunen-Rise et al., 1991a; Worthington et al., 2009). If we make the most conservative assumption, that of a 5-dB difference between levels measured in-coupler and FFE, this 94 dBA exposure would be equivalent to an 89 dBA free field noise (9.2 hours permitted at 89 dBA; thus, 4 hours=43% dose). The sound level was increased by 5 dB for the second series of exposures (~99 dBA in coupler × 4 hours, “DAP2”). Using the 5-dB time-intensity trading rule, this would halve the permitted listening time under OSHA standards, or, if exposure time is unchanged, then it would double the dose (i.e., 4 hours=86% dose). The third study included a small (1-dB) increase in exposure level (~100 dBA in coupler × 4 hours, “DAP3”). Using the same conservative 5-dB FFE conversion, this would correspond to a 4-hour free-field equivalent level of 95 dBA; OSHA defines a 4-hour exposure to 95 dBA as a 100% dose. Thus, the exposures used here were all at or below a 100% noise dose.
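The OSHA dose arithmetic used in the preceding two paragraphs can be reproduced with a short worked example; the reference-duration formula T = 8/2^((L−90)/5) underlies Table G-16a, and the 5-dB in-coupler to free-field-equivalent conversion is the conservative assumption stated above.

```python
# Worked example of the OSHA dose arithmetic described above: reference
# duration T = 8 / 2**((L - 90) / 5) hours (the basis of Table G-16a, 5-dB
# exchange rate) and Dose = 100 * C / T for C hours at level L (dBA).

def osha_reference_duration(level_dba):
    return 8 / 2 ** ((level_dba - 90) / 5)

def osha_dose_percent(level_dba, hours):
    return 100 * hours / osha_reference_duration(level_dba)

# Coupler levels converted to free-field equivalents with the conservative
# 5-dB assumption used in the text (94 -> 89, 99 -> 94, 100 -> 95 dBA).
for coupler_dba in (94, 99, 100):
    ffe_dba = coupler_dba - 5
    print(coupler_dba, ffe_dba,
          round(osha_reference_duration(ffe_dba), 1),   # permitted hours
          round(osha_dose_percent(ffe_dba, 4), 1))      # dose for a 4-hour exposure
# Approximately 43%, 87%, and 100% doses, matching the values discussed above.
```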
As stated above, the songs included in the playlists had been digitally manipulated to adjust overall level (such that all songs were presented at the same average level), and the within-song dynamic range was minimally compressed (as described in Le Prell et al., 2011c). The purpose of the digital manipulation was to reduce level differences across songs and improve empirical control of the exposure conditions for the purpose of a highly controlled human clinical trial protocol, but to maintain the real-world relevance of the signals. Adjusting the overall level of the music tracks is not fundamentally different from the manual adjustment a listener might make when listening to music that has been digitized at different levels, and many songs required little compression. Thus, it was not surprising that the manipulated music “sounded normal” to the investigators and the subjects. Taken together, two music playlists that were relatively constant in level across the 4-hour exposure (see Figure 1), but which had greater real-world relevance than pure-tone or broad-band/octave band noise insults, were used to develop a laboratory-based exposure protocol for studies that evaluate whether new therapeutic agents effectively reduce TTS.
Immediately prior to the music listening period, subjects were instructed not to adjust the volume, pause or stop the music, or skip songs. They were told that they could read, write, study, send text messages, use a laptop, or engage in any other quiet activity, and that they could visit the restroom at any time without seeking permission. The participants were instructed that they should not sleep during the listening period. Participants were checked on at 30-min intervals to ensure compliance with the study procedures during the 4-hour listening period.
Immediately after the 4-hour music-listening period, subjects were surveyed to see if they had any current tinnitus symptoms, and they were asked how the music level compared to their normal listening level. Post-music functional evaluations were then initiated. Conventional pure-tone threshold assessments (0.25–8 kHz) were initiated at 15 minutes, 1 hour 15 minutes, 2 hours 15 minutes, and 3 hours 15 minutes post-music; EHF tests (10–16 kHz) were initiated as soon as conventional hearing tests were completed. DPOAE tests began after completing EHF tests. Each session ended with a repeat survey for any current tinnitus symptoms. The series of tests was repeated the next day, and for subjects tested at the two higher exposure levels (DAP2 and DAP3), one week later. One subject reported minor discomfort during placement of the insert earphones at the 24-hour post-music test and was referred to the supervising physician. Mild irritation of the ear canals was detected, but nothing warranting treatment, and the irritation fully resolved.
Inferential analyses of differences associated with the independent variables were obtained using repeated measures analyses of variance (ANOVA). Specifically, tests of main effects from these analyses and post hoc comparisons of least squares means are presented to establish the statistical significance of differences which are apparent in the tables and graphs. All analyses were carried out using PROC MIXED and PROC FREQ in version 9.1 of SAS.
DPOAE input/output (DPIO) functions were analyzed separately for each of the three studies using repeated measures ANOVA models. In each of these models, the dependent variable was DPOAE amplitude at specific f2 frequencies, which ranged from 2 kHz to 12 kHz, in response to input sound at f1 sound levels ranging from 25 to 65 dB SPL. Separate models were fit to compare data obtained before noise exposure to data collected at 6 different times after exposure, ranging from 15 minutes to 1 week. In these models, the ANOVA factors were 1) f1 level, 2) measurement time, 3) ear, and 4) the interaction between stimulus level and time of measurement. In order to examine differences between the three studies, we fit repeated measures ANOVA models which contained factors for 1) trial, 2) the trial by level interaction, and 3) the trial by time of measurement interaction, in addition to all of the factors contained in the trial-specific analyses described above. Additionally, in these analyses we added factors for 1) gender, 2) gender by level interaction, 3) gender by ear interaction, and 4) gender by trial interaction. As above, we examined pair-wise comparisons between trials at specific stimulus levels.
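For readers who do not use SAS, an analogous repeated-measures model could be sketched in Python; the data file and column names below are hypothetical, and this is not the PROC MIXED code actually used in the analyses.

```python
# Illustration only: the analyses above were run in SAS (PROC MIXED). An
# analogous repeated-measures model could be sketched with statsmodels; the
# data file and column names here are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

# Long-format table with one row per subject / ear / f2 / L1 level / test time.
df = pd.read_csv("dpoae_io_long.csv")

# DPOAE amplitude as a function of stimulus level, measurement time, ear,
# and the level-by-time interaction, with subject as the grouping factor.
model = smf.mixedlm("amplitude ~ C(l1_level) * C(time) + C(ear)",
                    data=df, groups=df["subject"])
result = model.fit()
print(result.summary())
```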
Fifty-seven of the 70 subjects that were screened were eligible to participate. Of the 13 subjects that were not eligible, 4 subjects (~6% of total population) were excluded for thresholds > 25 dB HL at one or more frequencies, 6 subjects (~9% of total population) had > 15 dB threshold asymmetry at one or more frequencies, and 3 subjects (~4% of total population) had air-bone gaps > 10 dB at one or more frequencies. Of the 57 subjects that were eligible and invited to participate, 33 subjects completed the music listening studies and 22 subjects either declined to schedule study dates or cancelled scheduled study dates. The other two subjects were excluded by the investigator during the study; one subject could not achieve test-retest reliability within 5 dB during pre-music baseline testing on the day of the study and the other subject began the music listening period, but at the first 30-min subject check, the subject was asleep with the earphones removed. Demographic information for the 33 subjects that participated in the studies is presented in Table 2.
Average threshold sensitivity for the 70 subjects screened was ~5 dB (Figure 2A), an outcome that is consistent with other recent data from similar populations (for review, see Borchgrevink, 2003). There were no differences between right ear and left ear thresholds (all p’s ≥ 0.05) (Figure 2B). There were statistically reliable differences in hearing thresholds when male and female subjects were compared, with males having worse thresholds than females at 0.25, 0.5, 1, 4, and 6 kHz (p’s < 0.05, after applying Satterthwaite correction for unequal sample size and/or unequal variance) (Figure 2C). Differences as a function of gender are consistent with an earlier report describing data collected during the first 56 screening tests (see Le Prell et al., 2011a for detailed discussion of screening outcomes in the first 56 subjects screened as potential participants). Subjects who were not eligible to participate in the study had worse thresholds than those who were eligible to participate at all standard audiometric frequencies except 2 kHz (p’s < 0.05, after applying Satterthwaite correction for unequal sample size and/or unequal variance) (Figure 2D).
No consistent deficits at any of the test frequencies were measured at the lowest listening level (DAP1); however, TTS was reliably observed after the listening levels were increased (DAP2, DAP3; see Figure 3A). With higher listening levels in the DAP2 and DAP3 studies, the most robust TTS was measured at 3–4 kHz, and, as levels increased from DAP2 to DAP3, a broader range of frequencies was affected. The most widely accepted evidence for NIHL is an audiogram with a “notched” configuration in combination with a history of noise exposure, and the pattern of music-induced change shown in Figure 3A is clearly notched. Significant recovery was evident over the first 3 hours post-music, with recovery to within 2 dB of baseline the following day, and complete recovery when follow-up was completed one week later. Complete recovery to baseline was observed in all subjects. The timeline of recovery after the highest level exposure is shown in Figure 3B.
Although extended high frequency (EHF) testing in the 10–16 kHz range is often used to detect ototoxic changes before the conventional range is affected, the current data do not provide evidence for TTS at EHF frequencies after DAP use (Figure 3A). Average threshold shift 15 min post-music was within ±2 dB of baseline at frequencies from 10 to 16 kHz. These data are not presented further.
There was significant individual variability in the amount of TTS measured 15 min post-music (Figure 4A). For the subjects in the DAP3 study, several variables with the potential to influence individual TTS outcomes were considered, including pre-music threshold, ear, gender, and genre selected. A statistically significant relationship was evident between pre-music baseline threshold at 4 kHz, and TTS at 4 kHz 15 min post-music (Figure 4B). Ears with the lowest (best) thresholds prior to DAP use had the largest TTS 15 min post-music [regression line: 4 kHz shift = −6.6 + (0.307*4 kHz pre-music threshold); R=0.4263; R²=0.1817; p <0.001]. TTS was equivalent in the right and the left ears (Figure 4C), and no consistent differences between TTS in male and female subjects were detected (Figure 4D). Only 2 subjects selected the rock playlist in each study; thus, it was not possible to determine the potential effects of genre on TTS (Figure 4E).
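For a concrete sense of the reported relationship, the regression equation can be applied directly; the values below simply evaluate the published coefficients and follow the authors' sign convention for the shift variable.

```python
# The reported regression applied numerically (coefficients as given above):
# predicted 4-kHz shift = -6.6 + 0.307 * (pre-music 4-kHz threshold), in the
# authors' sign convention for the shift variable.

def predicted_4khz_shift(baseline_db_hl):
    return -6.6 + 0.307 * baseline_db_hl

for baseline_db_hl in (0, 5, 10, 15):
    print(baseline_db_hl, round(predicted_4khz_shift(baseline_db_hl), 1))
```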
DPOAE amplitudes were measured at 9 different sound levels for 6 different f1/f2 frequency pairs, with tests conducted pre-music and at multiple post-music test times. No reliable changes in OAE amplitude were detected in the DAP1 study cohort (not shown). In the DAP2 study cohort, statistically reliable decreases in OAE amplitude were observed for the f2=3 kHz (p<0.05) and f2=4 kHz (p<0.01) test conditions (not shown). In the DAP3 study cohort, statistically reliable decreases in OAE amplitude were observed for the f2=3 kHz (p<0.05, see Figure 5A) and f2=4 kHz (p<0.01, see Figure 5C) test conditions, as well as f2=6 kHz (p<0.01, see Figure 5E) and f2=12 kHz (p<0.05). Table 3 summarizes the statistical reliability of the changes in OAE amplitude as a function of f1 sound level at the first post-music (post-1) test time. All changes in OAE amplitude returned to baseline (see Figures 5B, 5D, 5F). The most robust decreases in OAE amplitude were observed within 15–20 dB of threshold (with threshold defined as the level at which OAE amplitude is 5 dB greater than the measured noise floor). At higher primary tone levels, fewer reliable changes in OAE amplitude were evident.
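The DPOAE "threshold" definition used above (the lowest primary level at which DPOAE amplitude exceeds the noise floor by 5 dB) can be illustrated with a small sketch; the input-output points shown are hypothetical.

```python
# Hypothetical sketch of the DPOAE "threshold" definition used above: the
# lowest primary level at which DPOAE amplitude exceeds the noise floor by 5 dB.

def dpoae_threshold(io_points, criterion_db=5):
    """io_points: iterable of (L1 dB SPL, DPOAE amplitude dB, noise floor dB)."""
    for l1, amplitude, noise_floor in sorted(io_points):
        if amplitude - noise_floor >= criterion_db:
            return l1
    return None                                  # criterion never met

example_io = [(25, -12, -10), (30, -8, -11), (35, -3, -10), (40, 2, -11)]
print(dpoae_threshold(example_io))               # -> 35
```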
Subjects were asked post-music if they had tinnitus, and if they felt like they had any hearing loss, a sense of fullness in their ears, or any other hearing symptoms, other than tinnitus. If they reported tinnitus, they were asked to rate their tinnitus on both loudness and objectionable/bothersome scales that ranged from 1 (barely noticeable/not bothersome) to 10 (almost unbearably loud/unbearable). A total of 5 subjects reported perceived symptoms at the first post-music test. DAP1: One subject reported tinnitus but no other symptoms. That subject rated the tinnitus as “2” on both scales. DAP2: One subject reported perceived hearing loss, fullness, or other symptoms, but no tinnitus. DAP3: Three subjects in the DAP3 study reported tinnitus. For two of these subjects, there were no additional changes reported, and the tinnitus was resolved 1 hour later during the next survey. Loudness was rated “1” by both subjects, and bothersomeness was rated “1” (n=1) or “2” (n=1). For the third subject, the tinnitus lasted more than three hours; tinnitus was reported 3 hrs 15 min post-music, but not at the 24 hour post-music test. At the first test time, tinnitus loudness was rated “4” and bothersomeness was rated “5”; both ratings had decreased to “3” by the 3 hr 15 min test, with no tinnitus or other sensations reported the following day. This subject, with the longer-lasting, louder, and more bothersome tinnitus, also reported perceived hearing loss, fullness, or other symptoms, but only at the 15-min post music test. Taken together, tinnitus was not consistently reported (n=5 out of 33 subjects) even with the comparison limited to those exposed to the highest music level (n=3 out of 12 subjects). Tinnitus resolved within the first hour in most cases (4/5) and resolved within the first 24 hours in the worst case (1/5).
This study was not designed to provide detailed information on subjects’ normal music listening habits. However, we did ask subjects to qualitatively rate the loudness of the music they listened to in the study relative to their typical music listening level. Approximately 10–20% of the listeners reported that the loudness of the music they listened to in the study was about the same as their normal listening level, for each of the three listening levels (see Table 4). The majority of subjects in the DAP2 study described the study music level as somewhat louder than their normal listening level (55%). Of the subjects that participated in the DAP3 study, 42% described the music as somewhat louder than their normal listening level and 50% described it as much louder than their normal listening level. Multiple studies have measured preferred listening levels. Average listening levels are commonly reported to be on the order of 70–80 dBA in-ear/in-coupler although individual subject listening levels can range from ~50 dBA to over 110 dBA (Bradley et al., 1987; Wong et al., 1990; Hodgetts et al., 2007; Torre, 2008; Hodgetts et al., 2009; Kumar et al., 2009; Epstein et al., 2010; McNeill et al., 2010; Keith et al., 2011; Portnuff et al., 2011). Thus, the subset of subjects that reported the study music levels to be common listening levels is fairly consistent with the subset of subjects that have reported high listening levels in earlier studies that were explicitly designed to assess listening level.
The three music player studies described here document the effects of 4-hours of DAP use on individual subject thresholds for three different music listening levels (~94 dBA, ~98 dBA, and ~100 dBA, coupler level), with music manipulated to be presented at relatively constant levels across time. Changes were largest at 4 kHz, with reliable changes 15 min post music at frequencies ranging from 2 to 6 kHz. Changes at or near 4 kHz are consistent with an abundant literature showing noise induces hearing changes at frequencies from 3 through 6 kHz in humans. The current data provide important insight into individual differences in vulnerability to TTS after music exposure, with baseline sensitivity at 4 kHz serving as the best predictor for TTS after music exposure. Individual differences in vulnerability have been shown after other free-field exposures (Mills et al., 2001; Strasser et al., 2003); several investigators have reported that subjects with the best thresholds prior to exposure are the most vulnerable (i.e., they have the largest TTS post-exposure) (Lindgren & Axelsson, 1986; Mills et al., 2001).
In addition to tonotopically-appropriate shifts at predicted frequencies, tonotopically-inappropriate EHF threshold shifts and cochlear histopathology have also been reported after noise insult (Fried et al., 1976; Liberman & Kiang, 1978). Consistent with the notion that such phenomena translate to humans, EHF testing has been used for detecting ototoxic changes before the conventional frequency range is affected (Jacobson et al., 1969; Fausti et al., 1984a; 1984b; Rappaport et al., 1985; Kopelman et al., 1988). With respect to music studies, hearing threshold deficits of up to 16 dB were measured in the EHF range when subjects who had used personal music players for greater than 5 years were compared to control subjects (Peng et al., 2007), and, EHF deficits have also been measured in musicians (Schmuziger et al., 2006). Importantly, TTS has been shown in humans at EHF frequencies in addition to TTS at conventional frequencies (Kuronen et al., 2003; Balatsouras et al., 2005). However, no changes were detected during EHF measurements in these studies, a finding that is consistent with the failure to detect TTS at EHF frequencies in a group of musicians tested before and after rehearsal (Schmuziger et al., 2007). Current clinical and industrial practices do not include routine monitoring for NIHL at frequencies beyond 8 kHz, and the current study provides no compelling rationale for EHF threshold tests in measuring the effects of this exposure paradigm.
DPOAE amplitude was depressed at the same frequencies at which TTS was observed, and DPOAE amplitude recovered completely at all test frequencies. The DPOAE data confirm DAP use affected outer hair cell (OHC) function, but there was no evidence suggesting the DPOAE metric was more sensitive than conventional pure-tone threshold tests for measuring the temporary effects of this music exposure paradigm. Fewer music-induced changes in DPOAE amplitude were detected at higher L1 and L2 primary tone levels; this is consistent with data from animal subjects. Ototoxic drugs (such as aminoglycoside antibiotics and loop diuretics) eliminate DPOAEs at lower L1 and L2 levels, with less disruption of DPOAEs at higher L1 and L2 levels, leading to the suggestion that the DPOAEs generated with low level tones are actively generated by intact OHCs whereas DPOAEs measured with high level tones also reflect passive cochlear motion (Brown et al., 1989; Whitehead et al., 1992a, b; Mills & Rubel, 1994, 1996). Data such as these should guide the selection of DPOAE clinical test protocols, to optimize the potential for detection of DPOAE deficits in human patients by selectively assessing active OHC response.
Data collected in this study provide evidence of DAP-induced TTS under certain specific listening conditions. There was no evidence that gender influenced the effects of music on TTS, and there was no evidence for ear asymmetries. The best predictor of TTS at the 15 min post-music test time was pre-music baseline. In general, the better the baseline hearing, the more robust the TTS induced by music exposure. Although this finding suggests that narrowing study enrollment criteria may result in less variability in TTS across subjects, previous studies suggest this may not be true. Mills et al. (1981) required that subjects have ≤ 10 dB HL thresholds, and they reported standard deviations of 7 dB with respect to TTS, which is double the standard deviation of the current TTS measurements. Our DAP3 study design is suggested as a potential paradigm for assessing new otoprotective agents, and as a common platform against which outcomes can be compared across agents. As discussed below, however, any use of this or other TTS noise models in future investigations must be preceded by a thorough review of the current and emerging literature regarding decreases in synaptic density after noise exposure. Exposures that induce TTS of ~40–50 dB measured 24 hours post-noise result in rapid synaptic deficits and decreased evoked potential amplitude in mice (Kujawa & Liberman, 2006, 2009; Lin et al., 2011). The TTS “threshold” below which there is no lasting synaptic change is not known, and should there be any new evidence which suggests even a small TTS that rapidly recovers is harmful, studies such as these would not be possible.
For the purposes of human clinical trial protocols for studies on otoprotective agents, development of a TTS music exposure paradigm is a significant advance. Other existing paradigms have potential strength in use of real-world noise insult, but this also serves to reduce empirical control of test conditions. For example, Kramer et al. (2006) conducted a randomized, placebo-controlled trial to evaluate prevention of TTS with 900 mg N-acetylcysteine (NAC) in 31 normal-hearing subjects who attended a nightclub. Pure-tone thresholds and DPOAE amplitude were measured before and after two hours of live music. Across the subject cohorts, average music levels during the 2-hour visits to the nightclub ranged from 92.5 dBA to 102.8 dBA, and the authors noted the uncontrolled variability in the exposure may have masked potential therapeutic effects. In that study, TTS at 4 kHz (in both treated and untreated subjects) averaged approximately 10–15 dB [depending on whether pure-tones were tested immediately after leaving the nightclub (TTS=14±2 dB SEM) or 15 min later, after testing OAEs (TTS=10±2 dB SEM)]. A controlled exposure, conducted in a laboratory setting with calibrated equipment, resolves the issue of uncontrolled exposure level across groups of subjects.
More recently, in a prospective double-blind otoprotection study, 53 male workers exposed daily to 88 to 89 dBA occupational noise received both NAC (1200 mg/day × 14 days) and placebo as part of a within-subjects cross-over trial, with treatment order randomized across subjects (Lin et al., 2010). Average shift-related TTS during placebo was 2.8 dB, compared to an average shift-related TTS of 2.5 dB during NAC treatment. Test-retest reliability is typically assumed to be on the order of 5 dB; thus, with average changes in threshold of less than 3 dB, it would be extremely difficult to measure protection. Similar challenges, in the form of little or no TTS after field-based weapons training, were reported in two additional studies that sought to evaluate potential reductions in TTS in human subjects treated with either 200 mg acetylcysteine twice/day × 2 days (Lindblad et al., 2011) or a combination of 18 mg β-carotene, 500 mg vitamin C, 400 IU vitamin E, and 315 mg magnesium × 2 days (Le Prell et al., 2011b). Use of a controlled laboratory exposure such as the current DAP3 exposure would eliminate this issue. Greater TTS in control conditions improves study power and increases the opportunity to measure the actual protection conferred by a potentially effective agent; however, the potential risks to subjects as a function of experimentally induced TTS must be fully disclosed. Taken together, the DAP model described here resolves the issues in other studies to date, specifically the variability of real-world exposures that depend on production of sound outside the investigators' control, and the use of noise exposures that induce only very small TTS. However, the potential for unanticipated risks to subjects who undergo small, brief TTS must be disclosed, based on the demonstrated risks associated with larger, longer-lasting TTS in rodent models.
Other laboratory models can be considered for use in TTS studies (with the same caveats about disclosure of potential risks to subjects). For example, Quaranta et al. (2004) exposed human subjects to 112 dB SPL narrowband noise centered at 3 kHz for 10 min. They reported an average change of 21.5±5.9 dB at 4 kHz, measured 2 min post noise, in placebo-treated subjects; this TTS was reduced by ~5 dB in a second group of subjects that had received vitamin B12 supplements once/day for 8 days prior to noise exposure. Attias et al. (2004) similarly exposed human subjects to 90 dB sensation level (SL) white noise for 10 minutes in a prospective double-blind, otoprotection study. After completing a preliminary TTS study with no investigational agent, the 20 male subjects were randomly assigned to receive either magnesium first (122 mg/day × 10 days) or placebo first in this within-subjects cross-over trial. TTS was greatest at 4 and 6 kHz, and was reduced from ~20±5 dB in the two control conditions to ~10±2.5 dB in the magnesium condition. Smaller decreases in OAE amplitude were measured in the magnesium-treated subjects, suggesting OHC protection may have contributed to the smaller changes in pure-tone thresholds.
Although these noise models provide highly controlled exposure paradigms, and induce robust TTS, they are not without shortcomings. Shortcomings of the models include less real-world relevance of the noise signal, and the need for careful consideration of the maximum TTS measured in the most vulnerable subjects. Attias et al. (2004) reported a maximum TTS measured in any individual subject of ~40 dB. The range of human TTS outcomes was likely similar in the study by Quaranta et al. (2004), based on nearly identical means and standard deviations across the two studies. In another more recent study (not including an intervention component), a 15-min exposure to 115-dB SPL narrow-band noise centered at 2 kHz was used to induce TTS, with 26 subjects having TTS at 4 kHz ranging from 10 to 30 dB, and one subject having a 5 dB improvement in threshold sensitivity post-noise (Lichtenhan & Chertoff, 2008). Other controlled exposure models are available, such as that of Mills et al. (1981), who exposed subjects to 88 dBA noise (free-field) for 24 hours, or 91 dBA wideband noise for 8 hours on two consecutive days. Median TTS was ~15–20 dB for both exposures; the range of TTS values was not reported. Given that significant challenges in recruiting subjects to participate in lengthy and/or repeated exposure studies were noted (Mills et al., 1981), we have less enthusiasm for this latter model. Regardless of the paradigm selected, new data from rodent studies have led to new risk disclosure requirements regarding the safety of TTS studies in normal hearing human subjects. These safety considerations are discussed in detail below.
There are two recent reports of lasting neural changes in the rodent inner ear after noise insult that induces ~40–50 dB TTS measured 24 hours post-noise (Kujawa & Liberman, 2009; Lin et al., 2011), with recent corroboration from a second laboratory (Wang & Ren, 2012). First, Kujawa and Liberman (2009) reported rapid, extensive loss of synaptic contacts between inner hair cells and auditory nerve fibers 24 hours post-noise (during the period of TTS), as well as loss of synaptic contacts subsequent to recovery from the TTS threshold deficits (8 weeks post-noise). Lasting decreases in tone-evoked ABR amplitude were tonotopically correlated with the observed decrease in synaptic density. Specifically, decreases in synaptic density were apparent at frequencies of ~25 kHz and above, and noise-induced decreases in ABR amplitude were reported at 32 kHz, but not 12 kHz. Although ABR amplitude was described at only two frequencies (12 kHz and 32 kHz), threshold shift data were provided for a wide range of frequencies. In general, at frequencies where threshold deficits measured 24 hours post-noise were ~40 dB or greater (i.e., above 25 kHz), there were synaptic deficits, whereas at frequencies where threshold deficits were ~20 dB or less (i.e., 15 kHz and below), there were no obvious synaptic changes.
These results were recently replicated in the guinea pig, with TTS deficits of ~40 dB or greater resulting in decreased ABR amplitude and decreased synaptic density (Lin et al., 2011). These data confirm in a second species that 40–50 dB TTS measured 24 hours post noise is harmful to the auditory nerve population. Any human noise exposure model that induces TTS reaching or exceeding 40 dB would be extremely difficult to justify given these new data. However, the greatest change in any of our human subjects to date has been 13 dB, with virtually complete recovery within 24 hours. This contrasts with the 40–50 dB deficits at 24 hours post-noise in the mouse and guinea pig studies. Those 40–50 dB deficits, measured 24 hours post noise, clearly exceed a critical boundary for lasting neural change; however, the critical boundary below which there is no lasting synaptic change is not known. Based on the lack of synaptic change at cochlear locations corresponding to frequencies where TTS was smaller, we interpret the animal data as consistent with a potential critical boundary of ~20–30 dB TTS at the 24-hour post-noise test time, with TTS changes that reach or exceed this boundary resulting in lasting synaptic change despite complete threshold recovery. Confirmatory evidence showing that smaller TTS deficits are not associated with synaptic change is critically needed to better inform assumptions regarding risk to human subjects that participate in TTS studies. The data available at this time indicate that TTS exceeding 20–30 dB at 24 hours post-noise has the potential to result in long-term neural changes, at least in rodents, and there is no reason to assume the phenomenon does not extend to other mammalian species.
Taken together, the current design was conservative with respect to selection of sound levels in that the highest selected level resulted in a small TTS post-music (13 dB maximum change, measured 15 min post-music) with virtually no TTS at the 24 hour post-music time, and these exposures represented no more than a 100% noise dose as defined by OSHA standards (which assume repeated exposure 5 days/week throughout a 40-year career). We interpret the small measured threshold changes, combined with rapid recovery and the animal data showing a lack of synaptic trauma at frequencies where TTS was less than 20 dB at 24 hours post-noise, as evidence that these exposures are likely safe to use in future studies. However, any future investigation must include a thorough review of the current literature with respect to data that are still emerging, given that the TTS “threshold” below which there is no lasting synaptic change is not known. Should there be any new evidence which suggests even a small TTS that rapidly recovers is harmful, studies such as these would not be possible, and investigators would need to consider alternative designs, using either the less-controlled subject-selected listening levels of past studies, or other real-world settings in which noise levels are subject selected, instead of investigator defined.
DAP use is common in adolescent and young adult populations; approximately half (49%) of the subjects in this study reported recreational DAP use. Almost one-fifth of the subjects (19%) failed to meet the eligibility criteria required for participation in this study. However, most subjects reported previous exposure to other recreational sound sources, such as loud music at concerts, in nightclubs, in their cars, and at other settings, making it difficult to identify a single contributing factor associated with screening failures. Of the subjects that met the normal hearing criteria and participated in the study, the DAP1 and DAP2 music levels were identified as common listening levels by ~ 20% of the subjects in each group (DAP1: 2 of 10 subjects; DAP2: 2 of 11 subjects) and the DAP3 music level was identified as a common listening level by 8% of that group (1 of 12 subjects). Although these are small samples, the self-reported listening level comparisons suggest some subset of listeners use DAPs at relatively high listening levels (i.e., 93–100 dBA in-ear level). This finding is consistent with a well established existing literature (Bradley et al., 1987; Wong et al., 1990; Hodgetts et al., 2007; Torre, 2008; Hodgetts et al., 2009; Kumar et al., 2009; Epstein et al., 2010; McNeill et al., 2010; Keith et al., 2011; Muchnik et al., 2011; Portnuff et al., 2011).
Although DAP devices can produce sounds with the potential to damage the inner ear (Katz et al., 1982; Fligor & Cox, 2004; Hodgetts et al., 2007), the extent to which listeners use these devices at levels and durations that can induce hearing loss remains an issue of active debate (Fligor, 2006, 2009; for discussion, see editorial comments in Rabinowitz, 2010; for excellent recent review, see Portnuff et al., 2011). Survey data suggest some listeners engage in potentially risky listening behaviors, including extended listening durations, listening at high sound levels, or both (see Vogel et al., 2008; Danhauer et al., 2009; Quintanilla-Dieck et al., 2009; Shah et al., 2009; Vogel et al., 2009), but the true prevalence of risky listening behavior is unknown as listening level, duration, and frequency must all be considered. Multiple studies reveal personal music exposure that would not by itself be considered hazardous based on the occupational noise risk criteria of Leq(8)=85 dBA after adjusting in-ear sound levels to FFE (see Bradley et al., 1987; Williams, 2009; Worthington et al., 2009; Epstein et al., 2010; McNeill et al., 2010). However, background listening conditions may play a role in risky listening behavior. More than 50% of subjects had Leq(8) values of ~87 dBA in a recent DAP listening study which recruited subjects on a campus sidewalk adjacent to the entrance of a New York City subway station (Levey et al., 2011). Studies on listening level commonly suffer from the shortcoming that subject hearing could not be measured in the field, where the listening levels were assessed. Other studies, however, have evaluated whether DAP use might have contributed to hearing loss in adolescents and young adults, and some data suggest DAP use could contribute to poorer auditory thresholds.
In a recent study of Chinese youth (students at Wuhan University, ages 19–23 years), threshold deficits of up to 9 dB were measured in the conventional frequency range (with the biggest deficits at 8 kHz) when subjects who had used personal music players for greater than 5 years were compared to control subjects (Peng et al., 2007). Self-reported music player use was also significantly associated with a notched audiometric configuration in a recent study on older US adolescents (11th grade students in a Pennsylvania high school, see Sekhar et al., 2011). However, several other studies report only small differences in conventional pure-tone audiometric thresholds (e.g., 2–3 dB; see Meyer-Bisch, 1996; Kim et al., 2009) or no threshold differences for subjects that use DAPs versus those who do not (Wong et al., 1990; Mostafapour et al., 1998; Kumar et al., 2009; Shah et al., 2009). A complementary approach is the use of DPOAEs to screen for and document music-induced auditory dysfunction. Decreased DPOAE amplitude and increased DPOAE thresholds are reported in DAP users with normal auditory thresholds, with the worst OAE outcomes measured in subjects using the devices the most (i.e., >6 hrs/week use or >5 years use, see Santaolalla Montoya et al., 2008). It is not surprising that pure-tone threshold tests and DPOAE test outcomes have shown a similar pattern, as both depend on intact peripheral function. Importantly, most of these studies of conventional pure-tone thresholds or DPOAE amplitude measures do not control for the possibility of exposure to loud (non-study) sound before data collection. Another key shortcoming of these trials is a failure to delineate other factors that can influence hearing status, including other sources of previous noise exposure, diet, health, and socioeconomic status.
Data such as these have driven considerable effort to measure the potential for changes in hearing after DAP use. Across studies, some music exposures produce no TTS (Lee et al., 1985; Krishnamurti & Grandjean, 2003; Bhagat & Davis, 2008) whereas other exposures can result in TTS although results vary across subjects (Lee et al., 1985; Miyake & Kumashiro, 1986; Turunen-Rise et al., 1991a, b; Hellstrom et al., 1998). The most systematic effort to measure TTS with increasing DAP exposure was by Keppler et al. (2010), who asked subjects to listen to music at 50%, 75%, and >75% gain settings on an iPod® for 1 hour. However, TTS was small (~1 dB) and there was no evidence for changes in the 3–6 kHz range, where TTS is most commonly detected (for additional commentary, see Zardouz et al., 2010). Perhaps the most directly relevant comparisons for the exposures used here come from Lee et al. (1985), who reported that 9 volunteers who chose to listen to music at 90 to 92 dB SPL (coupler level) for 3 hours had no significant threshold shift, 6 volunteers who chose to listen to music at 98 to 99 dB SPL (coupler level) for 3 hours had TTS of 10 dB at one or more frequencies, and a single volunteer who chose to listen to music at 103 to 104 dB SPL (coupler level) for 3 hours had TTS of 30 dB at 4 kHz, with smaller shifts at other frequencies. The data in the studies presented here add to the literature on music-induced TTS, and specifically define the extent and variability of TTS across 10–12 subjects per listening level, for the three levels tested. However, because the music used here was digitally manipulated, there was less rapid dynamic change and less song-to-song variability than in other studies. While this is a strength from the perspective of a controlled exposure designed for use in a clinical trial, it may have subtly influenced the extent of TTS relative to what would have been measured had the music not been modified.
The most important outcome of the current study is the development of a music exposure paradigm that results in a small but reliable (mean=6.3±3.9 dB; range=0–13 dB) TTS that quickly recovers over the first three hours post-music. The exposure can be carefully controlled by the investigator, has significant real-world relevance, and is more pleasant to listen to than pure-tones or noise bands. The development of a laboratory-based TTS model resolves many of the shortcomings of previous studies in which investigators have sought to determine whether a potential drug agent reduces TTS but failed to obtain conclusive evidence. In some cases, study conclusions were limited by the minimal TTS in controls (Lin et al., 2010; Le Prell et al., 2011b; Lindblad et al., 2011), and in others, study conclusions were limited by the variability of the exposures across subjects (Kramer et al., 2006). The model developed here also addresses the potential for safety concerns with noise exposures that induce robust TTS, which we define here as a 40 dB threshold shift, based on the 24-hour post-noise TTS data from animal studies (Kujawa & Liberman, 2006, 2009; Lin et al., 2011). The critical boundary below which there is no effect of TTS on synaptic density and evoked potential amplitude has not been established, although the lack of change at cochlear locations corresponding to frequencies at which TTS was ~20 dB or less (24 hours post-noise) suggests one possible boundary at which potential risks are increased. Exposures that induce smaller TTS changes are clearly more conservative than exposures that induce larger TTS changes, and the exposures described here do not exceed a 100% noise dose. Use of a consistent model across agents for assessing potential therapeutic benefit will allow an opportunity for comparisons across agents, as data on otoprotective agents begin to emerge.
The project was supported by U01 DC 008423 from the National Institute On Deafness And Other Communication Disorders, National Institutes of Health, awarded to the University of Michigan (JMM), via a subcontract awarded to the University of Florida (CGL). We thank the members of the NIH-selected Data Safety Monitoring Board, as well as Gordon Hughes at the NIH, for helpful feedback and suggestions throughout their oversight of these studies, and their review of an earlier version of this manuscript. We are grateful to Sharon Kujawa for guidance on the DPOAE protocols and discussion of safety issues, as well as her comments and suggestions on an earlier version of this manuscript. We also gratefully acknowledge the contributions of Jim Wyatt, Marcello Pineiro, and Robert Trahoitis at Brüel and Kjær, who were instrumental in developing calibration protocols. Finally, we thank Sebastian de la Calle, Kari Morgenstein, Marissa Rosa, Jason Schmitt, and Lindsey Willis-Banks, who consented and tested subjects at the University of Florida, and Susan DeRemer at the University of Michigan, who provided assistance with IRB applications.
Support: The project was supported by U01 DC 008423 from the National Institute On Deafness And Other Communication Disorders, National Institutes of Health.
1Care must be taken to fully inform potential subjects in future TTS studies, including those that involve therapeutic interventions, that some noise exposures have resulted in neural degeneration in animal models, even when both audiometric thresholds and DPOAE levels returned to pre-exposure values; see Safety Considerations in Discussion.
2Reports describing reduced synaptic density and decreased evoked potential amplitude after noise exposures that induced ~40–50 dB TTS in mice and guinea pigs were discussed with the DSMB and shared with the IRB as part of the process of evaluating potential risks to subjects.