Navigating is a complex cognitive task that places high demands on spatial abilities, particularly in the absence of sight. Significant advances have been made in identifying the neural correlates associated with various aspects of this skill; however, how the brain navigates in the absence of visual experience remains poorly understood. Furthermore, how neural network activity relates to the wide variability in navigational independence and skill within the blind population is also unknown. Using fMRI, we investigated the neural correlates of audio-based navigation within a large-scale, indoor virtual environment in early, profoundly blind participants with differing levels of spatial navigation independence (assessed by the Santa Barbara Sense of Direction (SBSoD) scale). Performing path integration tasks in the virtual environment was associated with activation within areas of a core network implicated in navigation. Furthermore, we found a positive relationship between SBSoD scores and activation within the right temporal parietal junction (TPJ) during the planning and execution phases of the task. These findings suggest that differential navigational ability in the blind may be related to the utilization of different brain network structures. Further characterization of the factors that influence network activity may have important implications for how this skill is taught in the blind community.
early blind; virtual environments; navigation; wayfinding; path integration; posterior parietal cortex; temporal parietal junction
Individuals using a visual-to-auditory sensory substitution device (SSD) called ‘The vOICe’ can identify objects in their environment through images encoded by sound. We have shown that identifying objects with this SSD is associated with activation of occipital visual areas. Here, we show that repetitive transcranial magnetic stimulation (rTMS) delivered to a specific area of occipital cortex (identified by functional MRI) profoundly impairs a blind user’s ability to identify objects. rTMS delivered to the same site had no effect on a visual imagery task. The task- and site-specific disruptive effect of rTMS in this individual suggests that the cross-modal recruitment of occipital visual areas is functional in nature and critical to the patient’s ability to process and decode the image sounds using this SSD.
blindness; occipital cortex; plasticity; repetitive transcranial magnetic stimulation; sensory substitution
Audio-based Environment Simulator (AbES) is virtual environment software designed to improve real-world navigation skills in the blind. Using only audio-based cues, set within the context of a video game metaphor, users gather relevant spatial information regarding a building’s layout. This allows the user to develop an accurate spatial cognitive map of a large-scale, three-dimensional space that can be manipulated for the purposes of a real indoor navigation task. After game play, participants are then assessed on their ability to navigate within the target physical building represented in the game. Preliminary results suggest that early blind users were able to acquire relevant information regarding the spatial layout of a previously unfamiliar building, as indexed by their performance on a series of navigation tasks. These tasks included path finding through the virtual and physical building, as well as a series of drop-off tasks. We find that the immersive and highly interactive nature of the AbES software appears to greatly engage the blind user to actively explore the virtual environment. Applications of this approach may extend to larger populations of visually impaired individuals.
virtual environments; action video games; blind; audio; rehabilitation; indoor navigation; spatial cognitive map
Growing evidence suggests that sensory deprivation is associated with dramatic crossmodal neuroplastic changes in the brain. In the case of visual and auditory deprivation, there is functional recruitment of brain areas normally associated with the sense that is lost by those sensory modalities that are spared. Furthermore, these changes seem to underlie adaptive and compensatory behaviours observed in both blind and deaf individuals. Although there are differences between these two populations due to the very nature of the deprived sensory modality, there seem to be common principles regarding how the brain copes with sensory loss and the factors that influence how neuroplastic changes come about. Here, we discuss crossmodal neuroplasticity with regard to behavioural adaptation following sensory deprivation and highlight the possibility of maladaptive consequences within the context of rehabilitation.
For individuals who are blind, navigating independently in an unfamiliar environment represents a considerable challenge. Inspired by the rising popularity of video games, we have developed a novel approach to train navigation and spatial cognition skills in adolescents who are blind. Audio-based Environment Simulator (AbES) is a software application that allows for the virtual exploration of an existing building set in an action video game metaphor. Using this ludic-based approach to learning, we investigated the ability of adolescents with early-onset blindness to acquire spatial information through the exploration of a target virtual indoor environment. Following game play, participants were assessed on their ability to transfer and mentally manipulate acquired spatial information on a set of navigation tasks carried out in the real environment. Success in transferring navigation skill performance was markedly high, suggesting that interacting with AbES leads to the generation of an accurate spatial mental representation. Furthermore, there was a positive correlation between success in game play and navigation task performance. The role of virtual environments and gaming in the development of mental spatial representations is also discussed. We conclude that this game-based learning approach can facilitate the transfer of spatial knowledge and, further, can be used by individuals who are blind for the purposes of navigation in real-world environments.
early blind; adolescent; navigation; spatial cognition; gaming for learning; serious videogames; virtual environment
For profoundly blind individuals, navigating in an unfamiliar building can represent a significant challenge. We investigated the use of an audio-based, virtual environment called Audio-based Environment Simulator (AbES) that can be explored for the purposes of learning the layout of an unfamiliar, complex indoor environment. Furthermore, we compared two modes of interaction with AbES. In one group, blind participants implicitly learned the layout of a target environment while playing an exploratory, goal-directed video game. By comparison, a second group was explicitly taught the same layout following a standard route and instructions provided by a sighted facilitator. As a control, a third group interacted with AbES while playing an exploratory, goal-directed video game; however, the explored environment did not correspond to the target layout. Following interaction with AbES, a series of route navigation tasks was carried out in the virtual and physical building represented in the training environment to assess the transfer of acquired spatial information. We found that participants from both modes of interaction were able to transfer the spatial knowledge gained, as indexed by their successful route navigation performance. This transfer was not apparent in the control participants. Most notably, the game-based learning strategy was also associated with enhanced performance when participants were required to find alternate routes and short cuts within the target building, suggesting that a ludic-based training approach may provide a more flexible mental representation of the environment. Furthermore, outcome comparisons between early and late blind individuals suggested that greater prior visual experience did not have a significant effect on overall navigation performance following training. Finally, performance did not appear to be associated with other factors of interest such as age, gender, and verbal memory recall.
We conclude that the highly interactive and immersive exploration of the virtual environment greatly engages a blind user to develop skills akin to positive near transfer of learning. Learning through a game play strategy appears to confer certain behavioral advantages with respect to how spatial information is acquired and ultimately manipulated for navigation.
early blind; late blind; navigation; spatial cognition; games for learning; videogames; virtual environment; near transfer of learning
We have previously reported that transcranial direct current stimulation (tDCS) delivered to the occipital cortex enhances visual functional recovery when combined with 3 months of computer-based rehabilitative training in patients with hemianopia. The principal objective of this study was to evaluate the temporal sequence of effects of tDCS on visual recovery as they appear over the course of training and across different indicators of visual function.
Primary outcome measures were (i) shifts in the visual field border and (ii) stimulus detection accuracy within the affected hemifield. These were compared between patients randomized to either vision restoration therapy (VRT) combined with active tDCS or VRT paired with sham tDCS. Training comprised two half-hour sessions, three times a week, for 3 months. Primary outcome measures were collected at baseline (pretest), monthly interim intervals, and at posttest (3 months). As secondary outcome measures, contrast sensitivity and reading performance were collected at pretest and posttest time-points only.
Active tDCS combined with VRT accelerated the recovery of stimulus detection, as between-group differences appeared within the first month of training. In contrast, a shift in the visual field border was only evident at posttest (after 3 months of training). tDCS did not affect contrast sensitivity or reading performance.
These results suggest that tDCS may differentially affect the magnitude and sequence of visual recovery in a manner that is task-specific to the type of visual rehabilitative training strategy employed.
transcranial direct current stimulation (tDCS); brain stimulation; hemianopia; visual field; rehabilitation; vision restoration therapy (VRT)
A long-standing debate in cognitive neuroscience pertains to the innate nature of language development and the underlying factors that determine this faculty. We explored the neural correlates associated with language processing in a unique individual who is early blind, congenitally deaf, and possesses a high level of language function. Using functional magnetic resonance imaging (fMRI), we compared the neural networks associated with the tactile reading of words presented in Braille, Print on Palm (POP), and a haptic form of American Sign Language (haptic ASL or hASL). With all three modes of tactile communication, identifying words was associated with robust activation within occipital cortical regions as well as posterior superior temporal and inferior frontal language areas (lateralized within the left hemisphere). In a normally sighted and hearing interpreter, identifying words through hASL was associated with left-lateralized activation of inferior frontal language areas; however, robust occipital cortex activation was not observed. Diffusion tensor imaging-based tractography revealed differences consistent with enhanced occipital-temporal connectivity in the deaf-blind subject. Our results demonstrate that in the case of early onset of both visual and auditory deprivation, tactile-based communication is associated with an extensive cortical network implicating occipital as well as posterior superior temporal and frontal language-associated areas. The cortical areas activated in this deaf-blind subject are consistent with characteristic cortical regions previously implicated in language. Finally, the resilience of language function within the context of early and combined visual and auditory deprivation may be related to enhanced connectivity between relevant cortical areas.
deafness; blindness; tactile language; neuroplasticity; fMRI; diffusion tensor imaging
In this work, we present the results of a cognitive impact evaluation of Audiopolis, an audio- and/or haptic-based videogame. The software was designed, developed, and evaluated for the purpose of developing orientation and mobility (O&M) skills in blind users. The videogame was evaluated through cognitive tasks performed by a sample of 12 learners. The results demonstrated that the use of Audiopolis had a positive impact on the development and use of O&M skills in school-aged blind learners.
Haptic and Audio Interfaces; Orientation; Mobility; People Who Are Blind; H.5.m. Information interfaces and presentation (e.g., HCI): Miscellaneous; Human Factors
Interactive digital technologies are currently being developed as a novel tool for education and skill development. Audiopolis is an audio- and haptic-based videogame designed for developing orientation and mobility (O&M) skills in people who are blind. We have evaluated the cognitive impact of videogame play on O&M skills by assessing performance on a series of behavioral tasks carried out in both indoor and outdoor virtual spaces. Our results demonstrate that the use of Audiopolis had a positive impact on the development and use of O&M skills in school-aged learners who are blind. The impact of audio and haptic information on learning is also discussed.
Haptic and Audio Interfaces; Orientation; Mobility; Navigation; People Who Are Blind; H.5.m. Information interfaces and presentation (e.g., HCI); Miscellaneous; Human Factors; Design; Measurement
Consistent evidence suggests that pitch height may be represented in a spatial format, having both a vertical and a horizontal representation. The spatial representation of pitch height results in response compatibility effects whereby high-pitch tones are preferentially associated with up-right responses and low-pitch tones with down-left responses (the Spatial-Musical Association of Response Codes (SMARC) effect), with the strength of these associations depending on individuals’ musical skills. In this study, we investigated whether listening to tones of different pitch affects the representation of external space, as assessed in a visual and haptic line bisection paradigm, in musicians and non-musicians. Low- and high-pitch tones affected the bisection performance of musicians differently, both when pitch was relevant and when it was irrelevant to the task, and in both the visual and the haptic modality. No effect of pitch height was observed on the bisection performance of non-musicians. Moreover, our data also show that musicians present a (supramodal) rightward bisection bias in both the visual and the haptic modality, extending previous findings limited to the visual modality and consistent with the idea that intense practice with musical notation and bimanual instrument training affects hemispheric lateralization.
musicians; pitch; space perception; line bisection; pseudoneglect
In the so-called McGurk illusion, when the synchronized presentation of the visual stimulus /ga/ is paired with the auditory stimulus /ba/, people generally hear /da/. The multisensory integration processing underlying this illusion seems to occur within the Superior Temporal Sulcus (STS). Here, we present evidence demonstrating that bilateral cathodal transcranial direct current stimulation (tDCS) of this area can decrease McGurk illusion-type responses. Additionally, we show that this manipulation of the audio-visual integrated output occurs irrespective of the number of eye fixations on the mouth of the speaker. Bilateral anodal tDCS of the parietal cortex also modulates the illusion, but in the opposite manner, inducing more illusion-type responses. This is the first demonstration of using non-invasive brain stimulation to modulate multisensory speech perception in an illusory context (i.e., both increasing and decreasing illusion-type responses to a verbal audio-visual integration task). These findings provide clear evidence that both the superior temporal and parietal areas contribute to multisensory integration processing related to speech perception. Specifically, STS seems fundamental for the temporal synchronization and integration of auditory and visual inputs. For its part, posterior parietal cortex (PPC) may adjust the arrival of incoming audio and visual information to STS, thereby enhancing their interaction in this latter area.
McGurk illusion; superior temporal; parietal cortex; transcranial direct current stimulation; multisensory integration; speech
Multisensory integration of information from different sensory modalities is an essential component of perception. Neurophysiological studies have revealed that audio-visual interactions occur early in time and even within sensory cortical areas believed to be modality-specific. Here we investigated the effect of auditory stimuli on visual perception of phosphenes induced by transcranial magnetic stimulation (TMS) delivered to the occipital visual cortex. TMS applied at subthreshold intensity led to the perception of phosphenes when coupled with an auditory stimulus presented within close spatiotemporal congruency at the expected retinotopic location of the phosphene percept. The effect was maximal when the auditory stimulus preceded the occipital TMS pulse by 40 ms. Follow-up experiments confirmed a high degree of temporal and spatial specificity of this facilitatory effect. Furthermore, audiovisual facilitation was only present at subthreshold TMS intensity for the phosphenes, suggesting that suboptimal levels of excitability within unisensory cortices may be better suited for enhanced cross-modal interactions. Overall, our findings reveal early auditory–visual interactions due to the enhancement of visual cortical excitability by auditory stimuli. These interactions may reflect an underlying anatomical connectivity between unisensory cortices.
Transcutaneous electrical stimulation has been shown to modulate nervous system activity, leading to changes in pain perception via the peripheral sensory system in a bottom-up manner. We tested whether different sensory behavioral tasks induce significant effects on pain processing and whether these changes correlate with cortical plasticity.
This randomized, parallel-design experiment included forty healthy right-handed males. Three different somatosensory tasks (learning tasks with and without visual feedback, and simple somatosensory input) were tested for their effects on pressure pain threshold and on motor cortex excitability as measured by transcranial magnetic stimulation (TMS). Sensory tasks induced hand-specific pain modulation effects: they increased pain thresholds of the left hand (which was the target of the sensory tasks) and decreased them in the right hand. TMS showed that somatosensory input decreased cortical excitability, as indexed by reduced motor evoked potential (MEP) amplitudes and increased short-interval intracortical inhibition (SICI). Although the somatosensory tasks similarly altered pain thresholds and cortical excitability, there was no significant correlation between these variables, and only the visual feedback task showed significant somatosensory learning.
The lack of correlation between cortical excitability and pain thresholds, and the lack of differential effects across tasks despite significant changes in pain thresholds, suggest that the analgesic effects of somatosensory tasks are not primarily associated with motor cortical neural mechanisms, and thus that subcortical neural circuits and/or the spinal cord are involved in the observed effects. Identifying the neural mechanisms of somatosensory stimulation on pain may open novel possibilities for combining different targeted therapies for pain control.
Computer-based video games are receiving great interest as a means to learn and acquire new skills. As a novel approach to teaching navigation skills in the blind, we have developed Audio-based Environment Simulator (AbES), a virtual reality environment set within the context of a video game metaphor. Despite the fact that participants were naïve to the overall purpose of the software, we found that early blind users were able to acquire relevant information regarding the spatial layout of a previously unfamiliar building using audio-based cues alone. This was confirmed by a series of behavioral performance tests designed to assess the transfer of acquired spatial information to a large-scale, real-world indoor navigation task. Furthermore, learning the spatial layout through a goal-directed gaming strategy allowed for the mental manipulation of spatial information, as evidenced by enhanced navigation performance when compared to an explicit route learning strategy. We conclude that the immersive and highly interactive nature of the software greatly engages the blind user to actively explore the virtual environment. This in turn generates an accurate sense of a large-scale, three-dimensional space and facilitates the learning and transfer of navigation skills to the physical world.
Once the topic of folklore and science fiction, the notion of restoring vision to the blind is now approaching a tractable reality. Technological advances have inspired numerous multidisciplinary groups worldwide to develop visual neuroprosthetic devices that could potentially provide useful vision and improve the quality of life of profoundly blind individuals. While a variety of approaches and designs are being pursued, they all share a common principle of creating visual percepts through the stimulation of visual neural elements using appropriate patterns of electrical stimulation. Human clinical trials are now well underway and initial results have been met with a balance of excitement and cautious optimism. As remaining technical and surgical challenges continue to be solved and clinical trials move forward, we now enter a phase of development that requires careful consideration of a new set of issues. Establishing appropriate patient selection criteria, methods of evaluating long-term performance and effectiveness, and strategies to rehabilitate implanted patients will all need to be considered in order to achieve optimal outcomes and establish these devices as viable therapeutic options.
The loss of vision has been associated with enhanced performance in non-visual tasks such as tactile discrimination and sound localization. Current evidence suggests that these functional gains are linked to the recruitment of the occipital visual cortex for non-visual processing, but the neurophysiological mechanisms underlying these crossmodal changes remain uncertain. One possible explanation is that visual deprivation is associated with an unmasking of non-visual input into visual cortex.
We investigated the effect of sudden, complete and prolonged visual deprivation (five days) in normally sighted adult individuals while they were immersed in an intensive tactile training program. Following the five-day period, blindfolded subjects performed better on a Braille character discrimination task. In the blindfold group, serial fMRI scans revealed an increase in BOLD signal within the occipital cortex in response to tactile stimulation after five days of complete visual deprivation. This increase in signal was no longer present 24 hours after blindfold removal. Finally, reversible disruption of occipital cortex function on the fifth day (by repetitive transcranial magnetic stimulation; rTMS) impaired Braille character recognition ability in the blindfold group but not in non-blindfolded controls. This disruptive effect was no longer evident once the blindfold had been removed for 24 hours.
Overall, our findings suggest that sudden and complete visual deprivation in normally sighted individuals can lead to profound, but rapidly reversible, neuroplastic changes by which the occipital cortex becomes engaged in processing of non-visual information. The speed and dynamic nature of the observed changes suggests that normally inhibited or masked functions in the sighted are revealed by visual loss. The unmasking of pre-existing connections and shifts in connectivity represent rapid, early plastic changes, which presumably can lead, if sustained and reinforced, to slower developing, but more permanent structural changes, such as the establishment of new neural connections in the blind.