There are many forms of hearing disorder; the two most prominent are conductive and sensorineural losses. Patients with conductive loss are viable candidates for hearing aids, which amplify environmental sound so that the tympanic membrane is driven more strongly and the sound energy reaches the inner ear at audible levels. Hearing aids therefore rely on a healthy inner ear with functional mechanoelectrical transducers: the hair cells. The other form of hearing loss, sensorineural loss, is most commonly caused by a loss of hair cells. In patients suffering from this form of loss, amplification of sound waves by hearing aids will not enhance sound perception because the sensory organ of Corti is critically damaged through the loss of hair cells. Currently, CI are the only treatment available to these patients. CI carry sound from the environment directly to the spiral ganglion, bypassing much of the inner ear. Only patients with complete or nearly complete deafness are considered candidates for CI.
In 1984, the first CI were approved for commercial distribution, and by 2005 CI had successfully restored at least partial hearing to over 100 000 people.8
Conceptually, a CI is simple. An external microphone receives sound waves from the environment; a speech processor converts this mechanical energy into a digital signal, which is then translated into electrical impulses that travel along a wire placed strategically in the cochlea. Along the insulated wire, focal active points function as electrodes that stimulate nerve endings, activating the spiral ganglion and prompting a cascade of impulses ending in the brain for auditory interpretation.9
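To make the signal chain concrete, the sketch below maps a short sound snippet to per-electrode stimulation amplitudes by splitting its spectrum into logarithmically spaced frequency bands, mimicking the cochlea's tonotopic layout (low frequencies at the apex, high frequencies at the base). This is a minimal illustration under simplifying assumptions; the function name `electrode_amplitudes` and all parameter values are hypothetical, and real speech processors use time-domain filterbanks, envelope smoothing, and patient-specific compression maps rather than a single FFT.

```python
import numpy as np

def electrode_amplitudes(signal, fs, n_electrodes=8, fmin=200.0, fmax=8000.0):
    """Map a sound snippet to per-electrode stimulation amplitudes.

    Illustrative only: one spectral snapshot stands in for the
    continuous filterbank-and-envelope processing of a real CI.
    """
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    # Logarithmically spaced band edges approximate the cochlea's
    # tonotopic (place-frequency) organization.
    edges = np.logspace(np.log10(fmin), np.log10(fmax), n_electrodes + 1)
    amps = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        band = spectrum[(freqs >= lo) & (freqs < hi)]
        amps.append(float(band.mean()) if band.size else 0.0)
    return amps

# A pure 1 kHz tone should mainly excite the electrode whose band contains 1 kHz.
fs = 16000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 1000 * t)
amps = electrode_amplitudes(tone, fs)
print(np.argmax(amps))
```

In this toy mapping, each electrode simply receives the mean spectral magnitude of its band; a clinical "map" would additionally compress these values into each patient's electrical dynamic range.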
Because of the crude, “robot-like” sound produced by a CI, the patient must train continuously with speech therapists to improve word recognition, in a process called mapping. With successful surgery and therapy, patients progress from near deafness to an acceptable level of effective communicative ability.10
While this may seem simple, we are limited both by the physical principles of the peripheral auditory system and by our knowledge of the biological and mechanical steps required to restore hearing. How to replicate the function of approximately 15 000 hair cells is still unknown, but recent work has improved sound quality, speech discrimination and speech recognition by changing the number and placement of electrodes in the cochlea, using hybrid implants, testing bilateral application of CI, and altering the number of channels along with the rate of electrode stimulation.
By manipulating the number and placement of electrodes along the cochlea, patients may gain improved pitch discrimination. A 2007 study showed that “turning off” electrodes inserted deeper than 560 degrees into the cochlea (essentially reducing their number) improved consonant and vowel recognition. There were two reasons for this improvement: 1) reduced interaction between overlapping apical electrodes (electrode interaction causes a degree of interference), and 2) better alignment of the analysis filters with the estimated overall pitch.11
This combined technique, or the insertion of a short-cochlear (hybrid) implant, may be ideal for older patients who have retained low- but not high-frequency hearing.12
These hybrid implants improve word and sentence recognition in profound high-frequency hearing loss,10, 13-15 the most frequent form of sensorineural hearing loss. In these patients, the implanted ear receives electrical stimulation along the first 10-20 mm of the cochlea and residual acoustic hearing for the remainder. The non-implanted ear hears only the residual acoustic stimulation, perhaps with amplification from a hearing aid.12, 13, 15
Low-frequency acoustic hearing also improves pitch recognition, which has positive implications for melody recognition.16
Research is also underway to improve sound localization and speech recognition in environments with a poor signal-to-background-noise ratio, such as “competitive talking”. Bilateral application of CI has shown promising results thus far.17
Increasing the number of channels (from 8 to 16) and the rate of electrode stimulation (from 813 pulses per second [pps] to 5 100 pps) has also allowed patients to better discriminate signal from background, enabling better word recognition.18
While many of these new technologies show promise and have allowed patients to improve their hearing and quality of life, there is still great debate over the pros and cons of CI. The benefits have already been mentioned, i.e. partial restoration of hearing and the ability to resume viable communication, but it is important to note that CI neither “cure” the patient of hearing loss nor restore “normal” hearing. CI patients have deteriorating inner ears: hair cells are dying, and with hair cell death, important neurotrophic factors are no longer supplied to the sensory neurons. In time these neurons will die, and not even CI will enable sound perception. This area has shown promising developments, as more has been discovered about delivering neurotrophic factors independently of hair cells in CI recipients to prolong nerve survival and, thus, hearing longevity.3, 19
Research is making strides in enhancing sound quality and the discrimination between signal and background noise,17 but this technology is still inferior to natural human hearing. Furthermore, CI restrict diagnostic tests such as magnetic resonance imaging and can be inconvenient (battery replacement, removal prior to water exposure, mechanical pieces). Cochlear implantation also carries a negative societal stigma. The 2000 documentary “Sound and Fury” highlighted the chasm within the deaf community, much of which has long rejected technology that allows a deaf person to hear.8
To this day, the National Association of the Deaf does not widely accept CI as a primary treatment option, a stance that alienates those in the deaf community who choose to undergo CI surgery.8
While implants and our general knowledge of the ear have improved, CI still fall short of natural human hearing and certainly do not provide a cure. Technology is progressing quickly, however, and someday we may be able to satisfactorily approximate natural hearing.20
Meanwhile, molecular biology is racing ahead with genetic manipulation of existing cells as well as embryonic and adult stem cell technologies. With our growing knowledge base in functional genomics, it may become possible to over-express regulatory genes, causing the transdifferentiation of supporting cells into hair cells.21
The drawback to this approach, however, is that it would deplete the supply of supporting cells, causing structural disorganization of the organ of Corti.21
Although stem cell technology is, in many ways, still in its infancy, in the long run it may be possible to differentiate induced pluripotent stem cells (iPS cells), through a well-designed series of intermediates guided by numerous transcriptional and diffusible factors, to replace the lost hair cells. Although many major hurdles still exist, this treatment has the potential to become the gold standard for restoring hearing.