The success of modern neural prostheses is dependent on a complex interplay between the devices’ hardware and software and the dynamic environment in which the devices operate: the patient’s body or ‘wetware’. Over 110,000 severely or profoundly deaf individuals presently receive information enabling auditory awareness and speech perception from cochlear implants. The cochlear implant therefore provides a useful case study for a review of the complex interactions between hardware, software and wetware, and of the important role of the dynamic nature of wetware. This review will examine the evidence of changes in the wetware contributing to changes in speech perception and discuss how these changes relate to electrophysiological and functional imaging studies in humans. The relationship between the human data and evidence from animals of the remarkable capacity for plastic change of the central auditory system, even into adulthood, will then be examined. Finally, we will discuss the role of brain plasticity in neural prostheses in general.
The cochlear implant is one of the best-known examples of a neural prosthetic device, rivalled only by the cardiac pacemaker and the emerging deep brain stimulation devices (used mainly for the control of movement disorders associated with Parkinson’s disease; for review see Kringelbach et al., 2007). When one is asked to picture a cochlear implant, an image of the implant hardware immediately springs to mind. A typical modern implant consists of two hardware components: a small speech processor worn behind the ear with a coil attached to it; and a small metal container and electrode assembly that is actually implanted into the patient. The images of cardiac pacemakers and deep brain stimulation systems are not radically different: small metal or ceramic containers that are implanted in the patient. This image of neural prostheses as essentially hardware devices, consisting of power supplies, stimulators, wires and electrodes, was a fairly accurate representation of early devices.
The modern neural prosthesis is far more ‘intelligent’ than early devices, combining information from a variety of sources to provide feedback control of the device. For example, cardiac pacemakers are able to monitor a range of parameters, including the heart’s rhythm and the patient’s level of activity, and adjust pacing strategies appropriately (Schaldach, 1993). This additional level of complexity, often hidden from the patient, has led to modern neural prostheses, like most modern devices, being a complex combination of hardware and software. The modern cochlear implant is no exception, with embedded digital signal processors performing the speech processing. This has allowed software control over speech processing strategies and a range of other parameters that was not possible with hardware-only devices. However, neural prostheses do not operate in isolation or on a workshop bench; they operate within the patient’s body. It is the patient’s body that provides the third and final component of a neural prosthesis, which we will refer to as ‘wetware’ (Figure 1).
At the simplest level, wetware consists of the immediate biological environment surrounding the neural prosthesis and the interactions between that environment and the implant. Biocompatible materials and, specifically, the electrode-tissue interface are reviewed in more detail elsewhere in this issue. However, wetware also consists of other elements of the biological system, including muscles, hormones, nerves and the brain. Unlike the hardware and software components of a neural prosthesis, wetware is by its very nature dynamic and constantly changing. Often these changes are considered to be detrimental to performance, such as changes to tissue encapsulation of electrodes over time and with electrical stimulation (Newbold et al., 2004), neural tissue degradation as a result of electrode insertion trauma (Rennaker et al., 2005) and the continuing degeneration of sensory pathways (for review of the auditory system see Shepherd et al., 2006). However, not all changes result in reduced performance and some, such as the ability of the brain to undergo plastic reorganization, may actually lead to improved performance.
To highlight the possible beneficial effects of changes in wetware, this paper will review some of the evidence of plastic changes associated with the chronic use of neural prostheses. The cochlear implant provides an excellent case study, as the best cochlear implant recipients now exhibit near-normal open-set speech perception, at least in quiet environments (Blamey et al., 1996). This is a far cry from Count Volta’s report, around 1800, of hearing ‘a sound like the boiling or bubbling of some thick soup’ after connecting one of his recently developed batteries to two metal rods inserted in his ears (Volta, 1800). Much of the improvement in speech perception of cochlear implant recipients over the last few decades can be attributed to engineering improvements in the hardware and software. There have also been increasing efforts to utilise the dynamic aspects of wetware with appropriate training and rehabilitative techniques (see for example Dornan et al., 2007; Paatsch et al., 2006). We will first review the clinical evidence of changes in the wetware contributing to changes in speech perception, and will then discuss how these relate to electrophysiological and functional imaging studies of the auditory cortex in humans. We will then examine the relationship between the human data and evidence from animal studies of plastic changes in the central auditory system. We will focus on the response properties of neurons in, and the functional organization of, the primary auditory cortex (AI), resulting from chronic intracochlear electrical stimulation (ICES) of the auditory nerve, with an emphasis on behaviourally relevant stimulation. Finally, we will discuss the role of brain plasticity in neural prostheses in general.
In normal hearing individuals, the inner ear or cochlea processes sound in a tonotopically organized manner, with high frequencies being processed at the base of the cochlea and low frequencies at the apex. Modern cochlear implants rely on this pitch-place code to provide frequency-specific information via electrical activation of restricted regions of the cochlea. Therefore, it is not surprising that normal electrode pitch perception - the ability to discriminate between stimulation on different electrodes on the basis of pitch, and to rank the percepts in a manner consistent with a normal cochleotopic organization - is highly correlated with speech perception (Henry et al., 2000). However, the pitch-place code is a function of the wetware, and therefore electrode pitch perception is not fixed. It is possible to create a mismatch between cochlear location and the perceived pitch (i.e. change the pitch-place code), particularly in cochlear implant patients with some residual hearing (Reiss et al., 2007). Specifically, because it is possible to assign any portion of the acoustic frequency spectrum as the driving signal for a particular intra-cochlear electrode, patients can receive ICES derived from acoustic signals up to two octaves below those that would excite that cochlear region in normal hearing individuals. After some years of device use, the electrode pitch percepts of these patients come to match the programmed frequencies, rather than those predicted on the basis of cochlear position (Reiss et al., 2007). This change suggests top-down influences on the regions giving rise to the percept, from areas in which knowledge of the frequency composition of the language is stored. However, it should be noted that this finding of a dynamic pitch-place code is by no means universal (Vermeire et al., 2008), and the precise circumstances under which this remapping may occur are yet to be elucidated.
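The pitch-place code described above is commonly summarised by Greenwood’s place-frequency function for the human cochlea, F = A(10^(ax) − k) with A = 165.4, a = 2.1 and k = 0.88, where x is the fraction of basilar-membrane length measured from the apex. The sketch below is purely illustrative - the electrode position and the 300-Hz band allocation are hypothetical, not taken from the cited studies - but it shows how an assigned analysis band can sit roughly two octaves below the characteristic frequency of the place it stimulates:

```python
import math

def greenwood_cf(frac_from_apex: float) -> float:
    """Greenwood place-frequency map for the human cochlea.
    frac_from_apex: proportion of basilar-membrane length (0 = apex, 1 = base).
    Returns the characteristic frequency (Hz) of that place."""
    A, a, k = 165.4, 2.1, 0.88  # standard human constants
    return A * (10 ** (a * frac_from_apex) - k)

# Hypothetical electrode sitting 15 mm from the apex of a 35-mm cochlea.
x = 15 / 35
cf = greenwood_cf(x)  # characteristic frequency of the stimulated place

# Hypothetical processor map driving this electrode with a 300-Hz analysis band.
assigned_hz = 300.0
octaves_below = math.log2(cf / assigned_hz)
print(f"place CF = {cf:.0f} Hz; assigned band sits {octaves_below:.1f} octaves below")
```

Running the sketch with these (assumed) numbers gives a characteristic frequency of about 1.2 kHz for the stimulated place, with the 300-Hz allocation sitting roughly two octaves lower, in line with the mismatch described in the text.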
Another striking example of wetware changes is the universal finding that auditory experience is highly correlated with speech perception scores. This is most clearly demonstrated in postlingually deaf implant patients, who had sufficient auditory experience to develop language skills before their hearing loss: their word recognition scores are inversely correlated with the duration of deafness, and the ratio of duration of deafness to age at implantation has a negative impact on clinical performance (Blamey et al., 1996; Gantz et al., 1993; Govaerts et al., 2002; Kirk et al., 2002; Rubinstein et al., 1999; Sarant et al., 2001). The malleability of wetware is not limited to particular postlingual periods, as the absolute age of postlingually deaf patients does not appear to influence performance (Leung et al., 2005; Tyler and Summerfield, 1996). These results emphasise the capacity of the adult auditory system to undergo change, and contradict the age-old adage that you can’t teach an old dog new tricks.
In contrast to the seemingly enduring malleability of wetware in postlingually deaf patients, there is a marked effect of age at implantation for prelingually deaf patients, who have often had little or no auditory experience. If these patients are implanted as young adults, their temporal processing skills, as assessed by rate and gap detection tasks, are poor (Busby and Clark, 1999; Busby et al., 1993) and they do not exhibit normal electrode pitch percepts (Busby and Clark, 2000; Busby et al., 1992; Eddington et al., 1978; Tong et al., 1988). Not surprisingly, these patients also exhibit poor levels of speech perception (Busby and Clark, 1999; Busby et al., 1993; Dowell, 2002; Eddington et al., 1978). However, if implanted early, a majority of congenitally deaf children obtain open-set speech perception after 2–3 years of implant use at levels comparable to postlingually deaf adults (Dowell, 2002). Auditory experience with cochlear implants is vital for good speech perception in children (Blamey et al., 2001; Dawson et al., 1992; Dowell, 2002; Fryauf-Bertschy et al., 1997; Osberger et al., 1991; Sarant et al., 2001; Waltzman et al., 1992), as highlighted by children with a congenital hearing loss, who initially show poorer language development than children with an acquired hearing loss, but whose performance rapidly improves with device use (Dettman et al., 2007). Importantly, this improvement in communication skills begins to match that seen in normal development if the children receive a cochlear implant under 12 months of age (Dettman et al., 2007). Not just any input into the wetware will result in the appropriate changes: it is clear that family and educational environments emphasizing listening and speaking play a significant role in speech perception among paediatric cochlear implant subjects (Moog and Geers, 2003; Sarant et al., 2001).
Collectively, these findings suggest that there are critical periods during which appropriate input into the wetware can cause fundamental changes that are not possible during other epochs.
Studies of clinical performance in cochlear implant subjects are rich in examples of changing wetware, most notably the consistent emphasis on the positive influence of auditory experience on speech perception. It also appears that there is a critical period within which the wetware important for language development must receive appropriate input. However, it is unclear from the clinical data whether the critical periods are for language-specific wetware or for auditory processing in general. It is also important to emphasize that only ~20% of the variance in the performance data can be accounted for by known factors, including: the aetiology of the hearing loss; the duration of deafness; the age at implantation; the age at onset of deafness; and the duration of implant use (Blamey et al., 1996). There remain other factors, as yet unidentified but undoubtedly including wetware, that contribute significantly to clinical performance in implant subjects.
For the purpose of this review, the wetware in which changes might underlie the clinical changes described above can be divided into three broad categories: temporal processing wetware; spatial processing wetware (e.g. place-pitch coding mechanisms or the cochleotopic organization of the auditory pathway); and language-specific wetware. Obviously, changes in either the temporal or spatial wetware will also alter the input to the language-specific wetware. It is clear from the clinical evidence that wetware changes can occur not only with cochlear implant use, but also in the preceding period of deafness. Therefore, the following review of clinical electrophysiological and functional imaging data will include the effects of both long-term deafness and cochlear implant use.
Clinical electrophysiological techniques have provided evidence for critical periods for plasticity in wetware involved in temporal processing. Clinical studies of the P1 evoked potential, generated by both auditory thalamic and cortical sources (Sharma et al., 2005a), indicate that there is a sensitive period, ending around 3.5 years of age, during which the human central auditory pathway is maximally plastic (Eggermont and Ponton, 2003; Ponton et al., 1996; Ponton and Eggermont, 2001; Sharma et al., 2002). Deaf children who receive effective auditory input, via a cochlear implant, within this period develop electrically evoked cortical potentials with latencies (~100 ms) that reach those of age-matched normal-hearing children within 6 months of implantation (Sharma et al., 2005a). However, while the recorded potentials suggest near-normal maturation of middle (IV & deep III) cortical layers, there continues to be altered maturation of input to the superficial (II, upper III) layers (Ponton and Eggermont, 2001). In contrast, congenitally deaf children who receive an implant after the age of 7 exhibit incomplete maturation of their electrically evoked cortical potentials, including latencies that are always longer than those of age-matched controls. It has therefore been suggested that the latency of the P1 evoked potential could be a useful diagnostic tool for assessing the developmental status of the auditory system (Sharma and Dorman, 2006; Sharma et al., 2005b), and for assessing whether the wetware is receiving appropriate stimulation to develop normally.
Modern imaging techniques for measuring brain activity in humans have provided a range of evidence for wetware changes following a profound hearing loss (Berthezene et al., 1997; Giraud et al., 2001; Hari et al., 1988; Herzog et al., 1991; Ito, 1993; Ito et al., 1993; Lazeyras et al., 2002; Nishimura et al., 2000; Okazawa et al., 1996; Pelizzone et al., 1986). Collectively, these studies report low levels of auditory cortical activity in profoundly deaf subjects - the longer the duration of deafness, the lower the level of activity recorded. However, the plastic nature of wetware can lead to the ‘take over’ of normally auditory areas by other sensory modalities, particularly the secondary auditory areas (supratemporal gyrus/perisylvian region) normally used for auditory processing and language (Hickok et al., 1997; Nishimura et al., 1999; Petitto et al., 2000; Sadato et al., 2004). There is also one report of the recruitment of primary auditory cortex (AI), normally considered a purely auditory area, for processing visual stimuli in the profoundly deaf (Finney et al., 2001), although the take-over was limited to a small region of the right, but not the left, AI. There have been reports of positive correlations between low resting metabolic activity in AI prior to cochlear implantation and post-implantation speech perception scores for the prelingually deaf (Lee et al., 2001; Lee et al., 2007). This finding suggests that the best clinical outcomes for cochlear implant patients may in fact occur with the most immature auditory cortex, or the most naïve spatial and temporal wetware.
Cochlear implantation results in an increase in metabolic activity in AI to near-normal levels, with greater activity on the side contralateral to the implant (Lazeyras et al., 2002), suggesting a change in the spatial and/or the temporal wetware. It has also been reported that the magnitude of the increase in activity appears to be correlated with the performance of the implant patient (Green et al., 2005; Lee et al., 2007). Interestingly, the activity in ‘higher-order’ auditory centres of prelingually deaf patients is reported to decrease with cochlear implant experience (Lee et al., 2001), and to be lower in these patients than in postlingually deaf implant patients (Naito et al., 1997). Clearly, development of the wetware associated with specialization in auditory association areas is driven by auditory experience (Giraud et al., 2001).
Despite the ongoing improvements in clinical electrophysiological and functional imaging techniques, it is still difficult, if not impossible, to precisely determine the physiological correlates of the clinical evidence in human patients, as demonstrated by the relatively sparse observations presented above. Therefore, we will now review the evidence of plasticity in the central auditory pathway, focusing on the auditory cortex, from the much more extensive animal literature. Recently, some of the cellular and molecular changes associated with changes in afferent input to the auditory cortex have been reported (Tan et al., 2007); however, the electrophysiological correlates of such changes are less clear.
Before we start, there are three important issues to note. First, not all changes in neural responsiveness and organization are necessarily plastic in nature, as some changes can be explained as passive consequences of altered input. For example, destruction of cochlear outer hair cells leads to an immediate change in the frequency tuning of auditory nerve fibres, and consequently of neurons throughout the auditory pathway (Dallos and Harris, 1978). Unfortunately, it is not always a simple matter to distinguish between plastic and non-plastic changes (Calford, 2002; Irvine and Wright, 2005); however, we will define plasticity as involving some form of active or dynamic modification of neural properties resulting from the altered input. Second, as previously mentioned, wetware changes can occur not only with cochlear implant use, but also in the preceding period of deafness. Many of the wetware changes associated with a long-term sensorineural hearing loss occur in the periphery, including: significant reduction in the number of spiral ganglion neurons; shrinkage of the soma of spiral ganglion neurons; de-myelination of residual spiral ganglion neuron somata and possibly part of their central processes; and reduced spontaneous activity throughout the auditory pathway (for review see Shepherd et al., 2006). These deafness-induced peripheral wetware changes complicate the interpretation of plasticity in the auditory cortex, as they affect the input, and the organization of that input, into the auditory cortex. Similarly, any plastic changes in the auditory cortex associated with chronic implant use are equally difficult to disentangle from peripheral and subcortical wetware changes. Finally, as we have focused on plastic changes within the auditory cortex, we will also focus on studies that have used behaviourally relevant chronic ICES wherever possible.
It is clear from the clinical evidence that not just any input into the wetware will result in the appropriate changes; the input must contain behaviourally important information.
In response to ICES, neurons in AI of normal hearing (or acutely deafened) cats have well characterized input-output functions (with the majority of neurons exhibiting a monotonic increase in activation with increasing stimulus level), a 10-dB dynamic range, and minimum first spike latencies of around 8 ms (for review see Fallon et al., 2008). Cortical field potentials exhibit both a short- (<80 ms) and long- (~150 ms) latency response (Hartmann et al., 1997; Popelar et al., 1995). Either a short period of profound deafness (~2 weeks) in an adult animal (Raggio and Schreiner, 1999), or longer periods of deafness, including the early developmental period (Fallon et al., 2009; Hartmann et al., 1997; Raggio and Schreiner, 1999), results in a decrease in the thresholds of neurons in AI. The effect on dynamic range is less clear, with some studies reporting an increase in dynamic range (Hartmann et al., 1997; Raggio and Schreiner, 1999) while others observed no change (Fallon et al., 2009). Cortical field potentials in congenitally deaf cats are reduced in size and exhibit only a middle-latency response, with no long-latency responses evident (Klinke et al., 2001; Klinke et al., 1999). There are pronounced changes in current sinks (and therefore presumably synaptic currents) in different AI layers in these cats, with a decrease in current sinks at long (>30 ms) latencies in layers II, III and IV, and a decrease in the deeper (infragranular) layers IV, V and VI at all latencies (Kral et al., 2000; Kral et al., 2001). These are the only reports of changes in the temporal response characteristics of AI neurons (i.e. there are no reports of changes in minimum latency, response jitter or maximum following rate). This is particularly surprising, given the occurrence of significant down-stream changes, including a decrease in temporal processing in the inferior colliculus (IC) of neonatally deafened cats.
At this level, there are increases in both the minimum latency and response jitter, and a decrease in the maximum following rate of individual neurons following a long-term sensorineural hearing loss (Shepherd et al., 1999; Snyder et al., 1995). As with the clinical data, there is some suggestion of the activation of AI by visual input (Rebillard et al., 1980), although there is no evidence of visual responses in AI of congenitally deaf cats (Kral et al., 2003; Stewart and Starr, 1970).
There are few reports of the effects on single- and multi-unit response properties in AI of chronic, behaviourally relevant ICES of the auditory nerve delivered from an early age. However, it has been shown that chronic ICES can partially reverse the decrease in threshold seen in long-term deaf animals (Fallon et al., 2009), whereas dynamic range appears unaffected. Chronic ICES results in larger current source densities, particularly in layers II & III, resembling those in normal hearing animals (Klinke et al., 1999). Cortical field potentials in chronically stimulated animals, comprising both middle- and long-latency components, are similar to those in normal-hearing animals (Klinke et al., 2001); that is, there is an increase in the amplitude of the field potentials compared to deaf controls. Chronic ICES results in more sustained single- and multi-unit activity than in unstimulated deaf controls (Klinke et al., 1999), and a preliminary report suggests an increase in first spike latency and an increase in the maximum following rate (Fallon et al., 2007). These changes to the temporal processing wetware are difficult to interpret: an increase in first spike latency may suggest a decrease in temporal processing, whereas the increase in maximum following rate indicates improved temporal processing. This is in stark contrast to the changes observed down-stream in the IC, albeit with naïve chronic ICES. At this level, chronic ICES results in a decrease in minimum latency and response jitter and an increased maximum following frequency (Snyder et al., 1995; Vollmer et al., 2005; Vollmer et al., 1999), indicative of an increase in the precision of the temporal processing wetware. Interestingly, the temporal processing of electrical stimuli in the IC of chronically stimulated animals is reported to be superior to that in normal hearing animals.
As observed clinically, the type and duration of post-implantation auditory experience, or lack thereof, play a critical role in shaping wetware. While chronic ICES delivered from an early age allows the development of near normal current source densities, this effect is diminished with increasing delays in the initiation of the chronic ICES (Klinke et al., 1999). There is also a critical period during which the production of a range of neurotrophic factors important for dendritic growth and synaptic formation is influenced by a complex interaction between neural activity and developmental cues (for review see Kral et al., 2006). During this critical period, changes can be driven by simple ‘passive’ experience which in an adult animal has little or no effect (for review see Kral et al., 2006).
In summary, it appears that chronic, behaviourally relevant ICES of the AN, delivered from an early age, allows an experience-dependent maturation of the basic response properties of individual neurons within AI, albeit not exactly as would have occurred in a normal hearing animal.
As previously mentioned, modern cochlear implants rely on the spatial processing wetware to provide frequency-specific information; however, neither place-pitch coding nor the cochleotopic organization of the auditory pathway is immutable. There are three metrics of the spatial processing wetware that can be assessed. First, each location within AI should preferentially be activated by stimulation of a particular region of the cochlea. This feature of the cortical response is analogous to the preferential activation of different cortical locations by different frequencies of acoustic stimulation, and will therefore be referred to as ‘local tuning’. Second, these regions of selective activation should exhibit a functional cochleotopic organization along a predominantly caudal-rostral axis, which is the analogue of the well-studied tonotopic organization to acoustic stimulation (for review see Clarey et al., 1992); this feature will be referred to as cochlea-to-cortex mapping. Third, increasing stimulus intensity should recruit progressively larger areas of AI; this feature will be referred to as ‘cortical spread’.
In acutely deafened adult cats, more than 90% of locations within AI are preferentially activated by ICES of a restricted region of the cochlea, with an increase in threshold of approximately 50% of the dynamic range for each millimetre shift along the basilar membrane (Fallon et al., 2009). This selectivity is largely unaffected by either long-term deafness or chronic ICES. Interestingly, this selectivity is similar to that seen with acoustic stimulation, suggesting that by this metric the wetware appears fixed.
We have already reported clinical evidence that the spatial processing wetware does change, so it should come as no surprise that the normal cochlea-to-cortex mapping - in which a 1-mm shift along the basilar membrane corresponds to a 1–2 mm shift along the caudal-rostral axis of AI - is affected by auditory experience (Figure 2). A short period of profound deafness (~2 weeks) results in little change to the mapping (Raggio and Schreiner, 1999). The effects of longer periods of deafness, including the early developmental period, appear to be influenced by aetiology. Neonatal deafening in the cat, via ototoxic drugs, results in the complete absence of any effective hearing and the subsequent loss of the orderly mapping (Fallon et al., 2009; Raggio and Schreiner, 1999); congenitally deaf cats are reported to maintain a rudimentary mapping (Hartmann et al., 1997; Klinke et al., 1999; Kral et al., 2001; Kral et al., 2002). The loss of the normal cochlea-to-cortex mapping in AI is in contrast to the electrophysiological evidence from lower centres, most notably the IC. At this level, the normal mapping is maintained even after extended periods of deafness (Leake et al., 2000; Moore et al., 2002; Shepherd et al., 1999; Snyder et al., 1990). Chronic, behaviourally relevant ICES, delivered from a young age, is able to completely ameliorate the deafness-induced changes and results in a normal cochlea-to-cortex mapping (Fallon et al., 2009). This remarkable effect of ICES on the spatial processing wetware is likely to contribute, at least in part, to the near-normal open-set speech perception of prelingually deaf patients implanted at a young age.
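The two spatial gradients reported in the preceding paragraphs - a threshold increase of ~50% of the dynamic range per millimetre of cochlear offset, and a 1–2 mm cortical shift per millimetre of basilar-membrane shift - can be put into a rough back-of-envelope sketch. The linear forms, the 10-dB dynamic range and the 1.5x midpoint magnification are simplifying assumptions for illustration, not a model taken from the cited studies:

```python
DYNAMIC_RANGE_DB = 10.0  # typical AI dynamic range for ICES (see text)

def threshold_shift_db(cochlear_offset_mm: float) -> float:
    """Threshold penalty for activating a cortical site from an off-best
    cochlear place: ~50% of the dynamic range per millimetre of offset."""
    return 0.5 * DYNAMIC_RANGE_DB * abs(cochlear_offset_mm)

def expected_ai_shift_mm(basilar_shift_mm: float, magnification: float = 1.5) -> float:
    """Expected shift along the caudal-rostral axis of AI for a given shift
    along the basilar membrane (1-2 mm of cortex per mm of cochlea; the
    1.5x midpoint is an assumed default)."""
    return magnification * basilar_shift_mm

# A 1-mm electrode offset costs half the dynamic range in threshold,
# and maps to roughly 1.5 mm along the caudal-rostral axis of AI.
print(threshold_shift_db(1.0), "dB;", expected_ai_shift_mm(1.0), "mm")
```

The point of the sketch is simply that a 1-mm error in cochlear place is large on both scales: it consumes half the available dynamic range locally and displaces the centre of cortical activation by more than a millimetre.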
The final metric used to assess changes in the spatial processing wetware is cortical spread. In the cat, cortical spread changes during maturation, being largest one to two months after birth and reaching adult-like levels around 4 months of age (Kral et al., 2005). Congenital deafness results in a delay in maturation of approximately 2 months, but with near-normal adult responses (Kral et al., 2005). In contrast, ototoxically induced deafness, for even a short period (~2 weeks), results in an increase in the area of cortex activated, primarily in the caudal-rostral extent (Dinse et al., 1997; Fallon et al., 2009; Raggio and Schreiner, 1999). These results are in contrast to the normal spread of activation in the IC after long-term deafness (Leake et al., 2000; Snyder et al., 1990). The effects of chronic, behaviourally relevant ICES appear to be dependent on the aetiology of the deafness. In congenitally deaf cats, ICES results in an increase in cortical spread over and above that seen in unstimulated deaf controls (Klinke et al., 1999; Kral et al., 2001; Kral et al., 2002). Longer periods of ICES resulted in even more cortical spread (Klinke et al., 2001; Kral and Tillein, 2006), provided animals were implanted before approximately six months of age. In contrast, chronic ICES of ototoxically-deafened neonatal animals has been reported either to cause no change in cortical spread compared to deaf controls (Fallon et al., 2009), or to increase cortical spread (Dinse et al., 2003; Dinse et al., 1997). To reconcile the differences in ototoxically deafened animals, it is important to note that Fallon et al. (2009) recorded single- and multi-unit activity predominantly from the middle cortical layers of AI, reflecting predominantly thalamo-cortical input, whereas Dinse et al. (2003) used optical imaging, which predominantly reflects activity in the superficial cortical layers. In addition, Fallon et al. (2009) stimulated using a modified commercial cochlear implant, while the precise nature of the stimulation strategy used by Dinse et al. (2003) is unclear. It is known that chronic simultaneous ICES of two cochlear sectors results in a marked expansion and fusing together of the representation of the two sectors in the IC (Leake et al., 2000), while non-simultaneous ICES maintains - or even sharpens - the selectivity of the representations of those sectors in the IC (Leake et al., 2000; Snyder et al., 1990). Clearly, normal development of the spatial processing wetware is dependent on appropriate input; however, cochlear implant use appears to prevent or reverse many of the deafness-induced changes.
It is clear from the clinical data that there are critical periods during which it is important to ensure appropriate stimuli are available to allow the wetware required for good speech perception to develop normally. It was generally believed that this was also true for the development of temporal and spatial processing wetware (Hensch, 2004). Specifically, it was believed that changes in experience could drive changes in wetware during these critical periods - when neuronal pathways and connections were being formed - but not later in life. However, there is a growing body of literature reporting that, given the appropriate patterns of behaviourally significant input, adult sensory systems (for review see Kaas and Florence, 2001), and the adult auditory system in particular (for reviews see Irvine, 2007; Weinberger, 2007), are capable of undergoing plastic changes. Here it is important to stress that there do appear to be critical periods during which passive experience or naïve input can drive change, and that after the closure of these periods input with behaviourally significant content is required. This is highlighted in a number of studies that are of particular interest for this review. First are the studies by Zhou and Merzenich (2007; 2009), who reported that directed training can effectively change the temporal and spatial processing wetware in adult rats that would otherwise exhibit developmental impairments. Second is the study by Bao et al. (2004), who reported that specific training can effectively change the temporal processing wetware in adult rats. However, the precise conditions under which significant changes to the wetware can occur are yet to be fully elucidated. The difficulty of this problem is highlighted by the fact that the closure of the early critical period is not only dependent on the input received (Chang and Merzenich, 2003; Zhang et al., 2001), but can even differ for different parts of AI (de Villers-Sidani et al., 2008).
The continuing malleability of the temporal and spatial processing wetware does raise the hope that, given appropriate directed-training regimes, it might be possible to overcome the closure of the critical period for speech-processing-specific wetware.
We have used the cochlear implant as a case study to highlight the role of brain plasticity in the efficacy of modern neural prostheses. There are obvious parallels between cochlear implants and the emerging field of retinal implants or bionic eyes, and therefore the role of malleable wetware in these prostheses, and in fact all ‘sensory neural prostheses’, should be apparent. The wetware seems capable of learning to interpret abnormal (and in many ways impoverished) input to achieve near-normal perception. Perhaps less obvious is the role of brain plasticity, or wetware changes, in ‘motor neural prostheses’ such as functional electrical stimulation and brain-computer interfaces, where the wetware is working as an output, rather than an input, device. A clear example has recently come from the field of directly controlling prosthetics using cortical activity. Until recently, this control was achieved by recording the activity of neural populations in the motor cortex and then applying sophisticated algorithms to decode the intended movement, map this intention into an appropriate external space, and finally calculate the control signal for the prosthesis (for example see Hochberg et al., 2006). However, Moritz et al. (2008) recently reported that an alternative to this complex processing was to use a very simple algorithm and let the subject learn how to control the prosthesis. In fact, they reported that neurons could control prosthesis movement equally well regardless of any previous association with movement. This finding suggests that, rather than attempting to decode the neural activity with external software and hardware, we might be better off leaving much of the problem to the wetware.
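The contrast between the two control schemes can be caricatured in a short sketch. Everything below is illustrative: the firing rates, decoding weights, gain and offset are hypothetical values invented for this example, and real decoders (such as those behind Hochberg et al., 2006) are far more sophisticated than the single linear map shown here.

```python
import numpy as np

# Hypothetical firing rates (spikes/s) recorded from a small neural population.
rates = np.array([42.0, 7.5, 19.2, 33.1])

# Decoder-style control (caricature): external software maps population
# activity to an intended 2-D cursor velocity via learned weights, so the
# hardware/software carries the burden of interpreting the neural activity.
decode_weights = np.array([[0.02, -0.01, 0.03, 0.00],
                           [0.00,  0.04, -0.02, 0.01]])  # hypothetical weights
velocity = decode_weights @ rates

def direct_control(rate, gain=1.0, offset=20.0):
    """Direct control (caricature of the Moritz et al. approach): one neuron's
    rate is passed through a trivial fixed transform, and all of the adaptive
    mapping is left to the wetware, i.e. the subject learns the control."""
    return gain * (rate - offset)

signal = direct_control(rates[0])
print(velocity, signal)
```

The point of the contrast is where the learning lives: in the first scheme the weights must be fitted and recalibrated in software, while in the second the transform never changes and the brain adapts its firing to the prosthesis.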
There are, however, limits to what the wetware can do, and ultimately the best clinical outcomes will come from learning to utilise the specific capabilities of the wetware whenever possible. Perhaps the area with the biggest challenge in harnessing the wetware, but also with the most to gain, is closed-loop neural prostheses, typified by the closed-loop control of epilepsy (Fountas and Smith, 2007). In this situation, the wetware is acting as both an input to and an output of the neural prosthesis, and it will require the correct combination of engineering, physiological and clinical insight to achieve the best clinical outcomes. Finally, it is worth remembering that human cortex, just one part of the wetware, contains trillions of synapses that are estimated to process information at the equivalent of hundreds of millions of millions of computer instructions per second. Just as important, those trillions of synapses are in a constant state of flux, making the wetware not only amazingly powerful but also staggeringly plastic.
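The sense-detect-stimulate cycle of such a closed-loop device can be sketched in a few lines. This is a minimal caricature, assuming a hypothetical threshold on the line-length feature (a simple measure sometimes used in seizure detection); real implanted systems use considerably more elaborate detection and safety logic, and all signals and parameters below are simulated.

```python
import numpy as np

rng = np.random.default_rng(0)

def line_length(window):
    """Line-length feature: total absolute sample-to-sample change in a
    recording window; it grows with large, fast (seizure-like) activity."""
    return np.sum(np.abs(np.diff(window)))

THRESHOLD = 50.0  # hypothetical detection threshold

def closed_loop_step(recorded_window):
    """One loop iteration: sense -> detect -> (optionally) stimulate.
    The wetware is both the input (the recorded activity) and the
    output (the target of the stimulation) of the device."""
    if line_length(recorded_window) > THRESHOLD:
        return "stimulate"
    return "monitor"

# Simulated quiet background activity vs. large oscillatory activity.
quiet = 0.1 * rng.standard_normal(256)
seizure_like = 5.0 * np.sin(np.linspace(0, 60 * np.pi, 256))
print(closed_loop_step(quiet), closed_loop_step(seizure_like))
```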
The authors’ research is supported by NIDCD (HHS-N-263-2007-00053-C) and The Bionic Ear Institute, which wishes to acknowledge the receipt of Operational Infrastructure Support from the Victorian Government.