The Argus™ II 60-channel epiretinal prosthesis has been developed to provide partial restoration of vision to subjects blinded by outer retinal degenerative disease. To date the device has been implanted in 21 subjects as part of a feasibility study. In door-finding and line-tracking orientation-and-mobility testing six months post implantation, subjects showed improvements of 86% and 73%, respectively, with the system on versus off. In high-contrast square-localization tests using a touch-screen monitor, 87% of tested subjects performed significantly better with the system on than off. These preliminary results show that the Argus II system provides some functional vision to blind subjects.
Retinal prostheses, which restore partial vision to patients blinded by outer retinal degeneration, are currently in clinical trial. The Argus II retinal prosthesis system was recently awarded CE approval for commercial use in Europe. While retinal prosthesis users have achieved remarkable visual improvement to the point of reading letters and short sentences, the reading process is still fairly cumbersome. This study investigates the possibility of using an epiretinal prosthesis to stimulate visual braille as a sensory substitution for reading written letters and words. The Argus II retinal prosthesis system, used in this study, includes a 10 × 6 electrode array implanted epiretinally, a tiny video camera mounted on a pair of glasses, and a wearable computer that processes the video and determines the stimulation current of each electrode in real time. In the braille reading system, individual letters are created by a subset of dots from a 3 by 2 array of six dots. For the visual braille experiment, a grid of six electrodes was chosen out of the 10 × 6 Argus II array. Groups of these electrodes were then directly stimulated (bypassing the camera) to create visual percepts of individual braille letters. Experiments were performed in a single subject. Single letters were stimulated in an alternative forced choice (AFC) paradigm, and short 2–4-letter words were stimulated (one letter at a time) in an open-choice reading paradigm. The subject correctly identified 89% of single letters, 80% of 2-letter, 60% of 3-letter, and 70% of 4-letter words. This work suggests that text can successfully be stimulated and read as visual braille in retinal prosthesis patients.
retina; epiretinal prosthesis; sensory substitution; retinitis pigmentosa; blindness; perception; degeneration; sight restoration
Retinal prosthesis systems (RPS) are a novel treatment for profound vision loss in outer retinal dystrophies. Ideal prostheses would offer stable, long-term retinal stimulation and reproducible spatial resolution in a portable form appropriate for daily life.
We report a prospective, internally controlled, multicentre trial of the Argus II system. Twenty-eight subjects with light perception vision received a retinal implant. Controlled, closed-group, forced-choice letter identification tests and open-choice two-, three-, and four-letter word identification tests were carried out.
The mean ± SD percentages of correct letter identification for the 21 subjects tested were: letters L, T, E, J, F, H, I, U: 72.3 ± 24.6% system on and 17.7 ± 12.9% system off; letters A, Z, Q, V, N, W, O, C, D, M: 55.0 ± 27.4% system on and 11.8 ± 10.7% system off; and letters K, R, G, X, B, Y, S, P: 51.7 ± 28.9% system on and 15.3 ± 7.4% system off (p < 0.001 for all groups). A subgroup of six subjects was able to consistently read letters of reduced size, the smallest measuring 0.9 cm (1.7°) at 30 cm, and four subjects correctly identified unrehearsed two-, three-, and four-letter words. Average implant duration was 19.9 months.
Multiple blind subjects fitted with the Argus II system consistently identified letters and words using the device, indicating reproducible spatial resolution. This, in combination with stable, long-term function, represents significant progress in the evolution of artificial sight.
We studied the capability of the Argus II retinal prosthesis to guide fine hand movement, and demonstrated and quantified the improvement in guidance when using the device compared with not using it, for progressively less predictable trajectories.
A total of 21 patients with retinitis pigmentosa (RP), with remaining vision no better than bare light perception, and an implanted Argus II epiretinal prosthesis used a touchscreen to trace white paths on black backgrounds. Sets of paths were divided into three categories: right-angle/single-turn, mixed-angle/single-turn, and mixed-angle/two-turn. Subjects trained on paths by using prosthetic vision and auditory feedback, and then were tested without auditory feedback, with and without prosthetic vision. Custom software recorded position and timing information for any contact that subjects made with the screen. The area between the correct path and the trace, and the elapsed time to trace a path, were used to evaluate subject performance.
For right-angle/single-turn sets, average tracing error was reduced by 63% and tracing time increased by 156% when using the prosthesis, relative to residual vision. With mixed-angle/single-turn sets, error was reduced by 53% and time to complete tracing increased by 184%. Prosthesis use decreased error by 38% and increased tracing time by 252% for paths that incorporated two turns.
Use of an epiretinal visual prosthesis can allow RP patients with no more than bare light perception to guide fine hand movement visually. Further, prosthetic input tends to make subjects slower when performing tracing tasks, presumably reflecting greater effort. (ClinicalTrials.gov number, NCT00407602.)
A total of 21 blind retinitis pigmentosa patients used retinal prostheses to visually guide their hands to trace high-contrast paths of varying complexities. Prosthesis use decreased performance error by an average of 60% and increased time to complete the task by an average of 211%. Use of an epiretinal visual prosthesis can allow RP patients with no more than bare light perception to visually guide fine hand movement. Further, prosthetic input tends to make subjects slower when performing tracing tasks, presumably reflecting greater effort.
It has been hypothesized that a vision prosthesis capable of evoking useful visual percepts can be based upon electrically stimulating the primary visual cortex (V1) of a blind human subject via penetrating microelectrode arrays. As a continuation of earlier work, we examined several spatial and temporal characteristics of V1 microstimulation.
An array of 100 penetrating microelectrodes was chronically implanted in V1 of a behaving macaque monkey. Microstimulation thresholds were measured using a two-alternative forced choice detection task. Relative locations of electrically-evoked percepts were measured using a memory saccade-to-target task.
The principal finding was that two years after implantation we were able to evoke behavioural responses to electric stimulation across the spatial extent of the array using groups of contiguous electrodes. Consistent responses to stimulation were evoked at an average threshold current per electrode of 204 ± 49 µA (mean ± std) for groups of four electrodes and 91 ± 25 µA for groups of nine electrodes. Saccades to electrically-evoked percepts using groups of nine electrodes showed that the animal could discriminate spatially distinct percepts with groups having an average separation of 1.6 ± 0.3 mm (mean ± std) in cortex and 1.0 ± 0.2 degrees in visual space.
These results demonstrate chronic perceptual functionality and provide evidence for the feasibility of a cortically-based vision prosthesis for the blind using penetrating microelectrodes.
It has been theorized that sensorimotor processing deficits underlie Parkinson’s disease (PD) motor impairments including movement under proprioceptive control. However, it is possible that these sensorimotor processing deficits exclude tactile/proprioception sensorimotor integration: prior studies show improved movement accuracy in PD with endpoint tactile feedback, and good control in tactile-driven precision-grip tasks.
To determine whether tactile/proprioceptive integration in particular is affected by PD, nine subjects with PD (off-medication, UPDRS motor scores 19–42) performed an arm-matching task without visual feedback. In some trials one arm touched a static tactile cue that conflicted with dynamic proprioceptive feedback from biceps brachii muscle vibration. This sensory conflict paradigm has characterized tactile/proprioceptive integration in healthy subjects as specific to the context of tactile cue mobility assumptions and the intention to move the arm.
We found that the individuals with PD had poorer arm-matching accuracy than age-matched control subjects. However, PD-group accuracy improved with tactile feedback. Furthermore, sensory conflict conditions were resolved in the same context-dependent fashion by both subject groups. We conclude that the somatosensory integration mechanisms for prioritizing tactile and proprioceptive feedback in this task are not disrupted by PD and are not related to the observed proprioceptive deficits.
Parkinson’s disease; proprioception; touch; sensory integration
To restore functional form vision, epiretinal prostheses have been implanted in blind human subjects to electrically elicit percepts. These findings suggest that frequency modulation may be the best way to produce percepts that range widely in brightness while minimizing loss of spatial resolution.
In an effort to restore functional form vision, epiretinal prostheses that elicit percepts by directly stimulating the remaining retinal circuitry were implanted in human subjects with advanced retinitis pigmentosa (RP). In this study, manipulating pulse train frequency and amplitude had different effects on the size and brightness of the elicited phosphenes.
Experiments were performed on a single subject with severe RP (implanted with a 16-channel epiretinal prosthesis in 2004) on nine individual electrodes. Psychophysical techniques were used to measure both the brightness and size of phosphenes when the biphasic pulse train was varied by either modulating the current amplitude (with constant frequency) or the stimulating frequency (with constant current amplitude).
Increasing stimulation frequency always increased brightness, while having a smaller effect on the size of elicited phosphenes. In contrast, increasing stimulation amplitude generally increased both the size and brightness of phosphenes. These experimental findings can be explained by using a simple computational model based on previous psychophysical work and the expected spatial spread of current from a disc electrode.
Given that amplitude and frequency have separable effects on percept size, these findings suggest that frequency modulation improves the encoding of a wide range of brightness levels without a loss of spatial resolution. Future retinal prosthesis designs could benefit from having the flexibility to manipulate pulse train amplitude and frequency independently (clinicaltrials.gov number, NCT00279500).
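The dissociation described above can be illustrated with a toy model in the spirit of the "simple computational model" the abstract mentions: percept size is governed by how far suprathreshold current spreads from a disc electrode, while brightness also scales with pulse frequency. The quadratic spread law and every parameter value below are illustrative assumptions, not the study's fitted model.

```python
import math

def activated_radius(amplitude_ua, threshold_ua=50.0, k=2.0):
    """Radius (mm) of retina driven above threshold, assuming the threshold
    current grows with distance r from a disc electrode as
    threshold_ua * (1 + k * r**2). Parameter values are illustrative."""
    if amplitude_ua <= threshold_ua:
        return 0.0
    return math.sqrt((amplitude_ua / threshold_ua - 1.0) / k)

def percept(amplitude_ua, frequency_hz, threshold_ua=50.0, k=2.0):
    """Toy percept: size set by current spread only, brightness by the
    total charge delivered per second (amplitude x frequency)."""
    size = activated_radius(amplitude_ua, threshold_ua, k)
    brightness = amplitude_ua * frequency_hz
    return size, brightness
```

In this sketch, doubling frequency leaves the size unchanged but doubles brightness, whereas raising amplitude enlarges both, mirroring the separable effects reported above.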
Action requires knowledge of our body location in space. Here we asked if interactions with the external world prior to a reaching action influence how visual location information is used. We investigated if the temporal synchrony between viewing and feeling touch modulates the integration of visual and proprioceptive body location information for action. We manipulated the synchrony between viewing and feeling touch in the Rubber Hand Illusion paradigm prior to participants performing a ballistic reaching task to a visually specified target. When synchronous touch was given, reaching trajectories were significantly shifted compared to asynchronous touch. The direction of this shift suggests that touch influences the encoding of hand position for action. On the basis of this data and previous findings, we propose that the brain uses correlated cues from passive touch and vision to update its own position for action and experience of self-location.
Parietal Cortex; Touch; Action; Human Body; Body Location; Rubber Hand Illusion
Selective attention allows us to focus on particular sensory modalities and locations. Relatively little is known about how attention to a sensory modality may relate to selection of other features, such as spatial location, in terms of brain oscillations, although it has been proposed that low-frequency modulation (α- and β-bands) may be key. Here, we investigated how attention to space (left or right) and attention to modality (vision or touch) affect ongoing low-frequency oscillatory brain activity over human sensory cortex. Magnetoencephalography was recorded while participants performed a visual or tactile task. In different blocks, touch or vision was task-relevant, whereas spatial attention was cued to the left or right on each trial. Attending to one or other modality suppressed α-oscillations over the corresponding sensory cortex. Spatial attention led to reduced α-oscillations over both sensorimotor and occipital cortex contralateral to the attended location in the cue-target interval, when either modality was task-relevant. Even modality-selective sensors also showed spatial-attention effects for both modalities. The visual and sensorimotor results were generally highly convergent, yet, although attention effects in occipital cortex were dominant in the α-band, in sensorimotor cortex, these were also clearly present in the β-band. These results extend previous findings that spatial attention can operate in a multimodal fashion and indicate that attention to space and modality both rely on similar mechanisms that modulate low-frequency oscillations.
multisensory; magnetoencephalography; oscillations; alpha; beta; visual; somatosensory; prestimulus
This study evaluates the Argus™ II Retinal Prosthesis System in blind subjects with severe outer retinal degeneration.
The study design is a single arm, prospective, multicenter clinical trial.
Thirty subjects were enrolled in the United States and Europe between 6 June 2007 and 11 August 2009. All subjects were followed for a minimum of six months and up to 2.7 years.
The electronic stimulator and antenna of the implant were sutured onto the sclera using an encircling silicone band. Next, a pars plana vitrectomy was performed and the electrode array and cable were introduced into the eye via a pars plana sclerotomy. The microelectrode array was then tacked to the epiretinal surface.
The primary safety endpoint for the trial was the number, severity, and relation of adverse events. Principal performance endpoints were assessments of visual function as well as performance on orientation and mobility tasks.
Subjects performed statistically better with system ON vs. OFF in the following tasks: object localization (96% of subjects); motion discrimination (57%); and discrimination of oriented gratings (23%). The best recorded visual acuity to date is 20/1260. Subjects’ mean performance on Orientation and Mobility tasks was significantly better when the System was ON vs. OFF.
Seventy percent of the patients did not have any serious adverse events (SAEs). The most common SAE reported was conjunctival erosion or dehiscence over the extraocular implant; this was successfully treated in all subjects except one, which required explantation of the device without further complications.
The long-term safety results of Second Sight's retinal prosthesis system are acceptable, and the majority of subjects with profound visual loss perform better on visual tasks with the system than without it.
In performing search tasks, the visual system encodes information across the visual field at a resolution inversely related to eccentricity and deploys saccades to place visually interesting targets upon the fovea where resolution is highest. The serial process of fixation, punctuated by saccadic eye movements, continues until the desired target has been located. Loss of central vision restricts the ability to resolve the high spatial information of a target, interfering with this visual search process. We investigate oculomotor adaptations to central visual field loss with gaze-contingent artificial scotomas.
Spatial distortions were placed at random locations in 25° square natural scenes. Gaze-contingent artificial central scotomas were updated at the screen refresh rate (75 Hz) based on a 250 Hz eyetracker. Eight subjects searched the natural scene for the spatial distortion and indicated its location using a mouse-controlled cursor.
As the central scotoma size increased, the mean search time increased [F(3,28) = 5.27, p = .05], and the spatial distribution of gaze points during fixation increased significantly along the x [F(3,28) = 6.33, p = .002] and y [F(3,28) = 3.32, p = .034] axes. Oculomotor patterns of fixation duration, saccade size, and saccade duration did not change significantly, regardless of scotoma size.
There is limited automatic adaptation of the oculomotor system following simulated central vision loss.
scotoma; saccade; fixation; eyetracking; visual search
We examined the conversational skills of 2 adult males with severe motor and speech deficits resulting from cerebral palsy. A multiple baseline design across subjects was used to determine the effectiveness of an intervention strategy designed to teach them to use an augmentative communication system (Touch Talker) independently. The dependent measure was the number of conversation initiations relative to conversation reactions during spontaneous communication across baseline and treatment. The treatment included specific training on using the augmentative system to participate in communication. Once the intervention began, the production of conversation initiations accelerated at a rapid rate. The treatment program was effective in training the subjects to use the augmentative system to increase conversation participation. These results demonstrate that training on the operation of the device alone is not sufficient to ensure improvement in conversation performance, and that it is important to incorporate direct conversational treatment when providing instruction on the use of augmentative communication systems for severely speech-impaired individuals.
We analyze the problem of obstacle avoidance from a Bayesian decision-theoretic perspective using an experimental task in which reaches around a virtual obstacle were made toward targets on an upright monitor. Subjects received monetary rewards for touching the target and incurred losses for accidentally touching the intervening obstacle. The locations of target-obstacle pairs within the workspace were varied from trial to trial. We compared human performance to that of a Bayesian ideal movement planner (who chooses motor strategies maximizing expected gain) using the Dominance Test employed in Hudson et al. (2007). The ideal movement planner suffers from the same sources of noise as the human, but selects movement plans that maximize expected gain in the presence of that noise. We find good agreement between the predictions of the model and actual performance in most but not all experimental conditions.
In everyday, cluttered environments, moving to reach or grasp an object can result in unintended collisions with other objects along the path of movement. Depending on what we run into (a priceless Ming vase, a crotchety colleague) we can suffer serious monetary or social consequences. It makes sense to choose movement trajectories that trade off the value of reaching a goal against the consequences of unintended collisions along the way. In the research described here, subjects made speeded movements to touch targets while avoiding obstacles placed along the natural reach trajectory. There were explicit monetary rewards for hitting the target and explicit monetary costs for accidentally hitting the intervening obstacle. We varied the cost and location of the obstacle across conditions. The task was to earn as large a monetary bonus as possible, which required that reaches curve around obstacles only to the extent justified by the location and cost of the obstacle. We compared human performance in this task to that of a Bayesian movement planner who maximized expected gain on each trial. In most conditions, but not all, movement strategies were close to optimal.
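The expected-gain planning idea above can be sketched numerically. The following toy model is an illustration, not the authors' implementation; the intervals, noise level, and payoffs are all assumed for the example. It aims a reach along one spatial axis, models endpoint scatter as Gaussian, and picks the aim point that maximizes expected gain.

```python
import math

def norm_cdf(x, mu, sigma):
    """Cumulative probability of a Gaussian endpoint distribution."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def expected_gain(aim, sigma, target, obstacle, reward, penalty):
    """Expected gain of aiming at `aim` with endpoint noise `sigma`.
    `target` and `obstacle` are (lo, hi) intervals on one spatial axis;
    landing in the target earns `reward`, touching the obstacle costs `penalty`."""
    p_target = norm_cdf(target[1], aim, sigma) - norm_cdf(target[0], aim, sigma)
    p_obstacle = norm_cdf(obstacle[1], aim, sigma) - norm_cdf(obstacle[0], aim, sigma)
    return reward * p_target - penalty * p_obstacle

def optimal_aim(sigma, target, obstacle, reward, penalty, lo=-3.0, hi=3.0, n=601):
    """Grid search for the aim point that maximizes expected gain."""
    pts = [lo + (hi - lo) * i / (n - 1) for i in range(n)]
    return max(pts, key=lambda a: expected_gain(a, sigma, target, obstacle,
                                                reward, penalty))
```

With no obstacle penalty the planner aims at the target centre; raising the penalty shifts the optimal aim away from the obstacle, the same trade-off between reward and collision cost that the subjects faced.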
The purpose of this article is to present a wide field electrode array that may increase the field of vision in patients implanted with a retinal prosthesis.
Mobility is often impaired in patients with low vision, particularly in those with peripheral visual loss. Studies on low vision patients as well as simulation studies on normally sighted individuals have indicated a strong correlation between the visual field and mobility. In addition, it has been shown that an increased visual field is associated with a significant improvement in visual acuity and object discrimination. Current electrode arrays implanted in animals or humans vary in size; however, the retinal area covered by the electrodes has a maximum projected visual field of about 10°. We have designed wide-field electrode arrays that could potentially provide a visual field of 34°, which may significantly improve mobility. Tests performed on a mechanical eye model showed that it was possible to fix flexible polyimide dummy electrode arrays 10 mm wide onto the retina using a single retinal tack, and that the arrays could conform to the inner curvature of the eye. Surgeries on an enucleated porcine eye model demonstrated the feasibility of implanting 10 mm wide arrays through a 5 mm eye wall incision.
Wide field; electrode array; retinal prosthesis; visual field; mobility; retinitis pigmentosa; age related macular degeneration
We tested whether changing accuracy demands for simple pointing movements leads humans to adjust the feedback control laws that map sensory signals from the moving hand to motor commands. Subjects made repeated pointing movements in a virtual environment to touch a button whose shape varied randomly from trial to trial: squares, rectangles oriented perpendicular to the movement path, or rectangles oriented parallel to the movement path. Subjects performed the task on a horizontal table, but saw the target configuration and a virtual rendering of their pointing finger through a mirror mounted between a monitor and the table. On one-third of trials, the position of the virtual finger was perturbed by ±1 cm, either in the movement direction or perpendicular to it, when the finger passed behind an occluder. Subjects corrected quickly for the perturbations despite not consciously noticing them; however, they corrected almost twice as much for perturbations aligned with the narrow dimension of a target as for perturbations aligned with the long dimension. These changes in apparent feedback gain appeared in the kinematic trajectories soon after the time of the perturbations, indicating that they reflect differences in the feedback control law used throughout the duration of movements. The results indicate that the brain adjusts its feedback control law for individual movements "on-demand" to fit task demands. Simulations of optimal control laws for a two-joint arm show that accuracy demands alone, coupled with signal-dependent noise, lead to qualitatively the same behavior.
Feedback; optimal control; motor control; pointing; online control
The Argus II epiretinal prosthesis has been developed to provide partial restoration of vision to subjects blinded from outer retinal degenerative disease. Participants were surgically implanted with the system in the United States and Europe in a single arm, prospective, multicenter clinical trial. The purpose of this investigation was to determine which factors affect electrical thresholds in order to inform surgical placement of the device.
Electrode–retina and electrode–fovea distances were determined using SD-OCT and fundus photography, respectively. Perceptual threshold to electrical stimulation of electrodes was measured using custom developed software, in which current amplitude was varied until the threshold was found. Full field stimulus light threshold was measured using the Espion D-FST test. Relationships between electrical threshold and these three explanatory variables (electrode–retina distance, electrode–fovea distance, and monocular light threshold) were quantified using regression.
Regression analysis showed a significant correlation between electrical threshold and electrode–retina distance (R² = 0.50, P = 0.0002; n = 703 electrodes). Of the electrodes in contact with the macula (n = 207), 90.3% elicited percepts at charge densities less than 1 mC/cm²/phase. These threshold data also correlated well with the ganglion cell density profile (P = 0.03). A weaker, but still significant, inverse correlation was found between light threshold and electrical threshold (R² < 0.52, P = 0.01). Multivariate modeling indicated that electrode–retina distance and light threshold are highly predictive of electrode threshold (R² = 0.87; P < 0.0005).
Taken together, these results suggest that while light threshold should be used to inform patient selection, macular contact of the array is paramount.
Reported Argus II clinical study results are in good agreement with prior in vitro and in vivo studies, and support the development of higher-density systems that employ smaller diameter electrodes. (clinicaltrials.gov identifier: NCT00407602)
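The univariate relationships reported above are ordinary least-squares regressions. A minimal sketch of that computation (with fabricated illustrative numbers, not the trial's data) looks like:

```python
def linear_fit(x, y):
    """Ordinary least-squares fit y ~ a + b*x; returns (a, b, r_squared)."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    sxx = sum((xi - mean_x) ** 2 for xi in x)
    sxy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = mean_y - slope * mean_x
    ss_res = sum((yi - (intercept + slope * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - mean_y) ** 2 for yi in y)
    return intercept, slope, 1.0 - ss_res / ss_tot

# Fabricated example: electrode-retina distances (um) vs. thresholds (uA),
# roughly mimicking a positive distance-threshold trend.
distance_um = [0.0, 50.0, 100.0, 150.0, 200.0, 250.0]
threshold_ua = [30.0, 70.0, 95.0, 150.0, 180.0, 230.0]
a, b, r2 = linear_fit(distance_um, threshold_ua)
```

A positive slope `b` with a high `r2` corresponds to the reported finding that thresholds rise as the array lifts off the retina.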
retinal prosthesis; retinal degeneration; retinitis pigmentosa
In two experiments, the ability to use multisensory information (haptic information, provided by lightly touching a stationary surface, and vision) for quiet standing was examined in typically developing (TD) children, adults, and in 7-year-old children with Developmental Coordination Disorder (DCD). Four sensory conditions (no touch/no vision, with touch/no vision, no touch/with vision, and with touch/with vision) were employed. In experiment 1, we tested 4-, 6- and 8-year-old TD children and adults to provide a developmental landscape for performance on this task. In experiment 2, we tested a group of 7-year-old children with DCD and their age-matched TD peers. For all groups, touch robustly attenuated standing sway suggesting that children as young as 4 years old use touch information similarly to adults. Touch was less effective in children with DCD compared to their TD peers, especially in attenuating their sway velocity. Children with DCD, unlike their TD peers, also benefited from using vision to reduce sway. The present results suggest that children with DCD benefit from using vision in combination with touch information for standing control possibly due to their less well developed internal models of body orientation and self-motion. Internal model deficits, combined with other known deficits such as postural muscles activation timing deficits, may exacerbate the balance impairment in children with DCD.
Developmental Coordination Disorder (DCD); standing balance; multisensory; light touch; vision
Spatial working memory can maintain representations from vision, hearing, and touch, representations referred to here as spatial images. The present experiment addressed whether spatial images from vision and hearing that are simultaneously present within working memory retain modality-specific tags or are amodal. Observers were presented with short sequences of targets varying in angular direction, with the targets in a given sequence being all auditory, all visual, or a sequential mixture of the two. On two thirds of the trials, one of the locations was repeated, and observers had to respond as quickly as possible when detecting this repetition. Ancillary detection and localization tasks confirmed that the visual and auditory targets were perceptually comparable. Response latencies in the working memory task showed small but reliable costs in performance on trials involving a sequential mixture of auditory and visual targets, as compared with trials of pure vision or pure audition. These deficits were statistically reliable only for trials on which the modalities of the matching location switched from the penultimate to the final target in the sequence, indicating a switching cost. The switching cost for the pair in immediate succession means that the spatial images representing the target locations retain features of the visual or auditory representations from which they were derived. However, there was no reliable evidence of a performance cost for mixed modalities in the matching pair when the second of the two did not immediately follow the first, suggesting that more enduring spatial images in working memory may be amodal.
This study examined the mechanisms of left unilateral spatial neglect found in the bisection of lines after cueing to the left end point, and asked whether neglect occurs for the mental representation of a line. A representational bisection task was developed to eliminate the influence of the right segment of the physical line, which would otherwise attract attention. Eight patients with typical left unilateral spatial neglect underwent line and representational bisection tasks on a computer display with a touch panel. In the line bisection with cueing, they bisected a line after touching its left end point. In the representational bisection, the patients were presented with a line only until they touched its left end point; on the blank display, they then pointed to the subjective midpoint of the erased line. Performance on the two bisection tasks was compared while the length and position of the stimulus lines were varied.
The errors in the representational bisection were greater than or equivalent to those in the line bisection with cueing. The effect of line length, whereby errors became greater for longer lines, was found equally in the line bisection with cueing and the representational bisection, and was confirmed in a condition in which the right end point was placed at a fixed position and the line length was varied. Thus, even with cueing to the left end point, the rightward bisection errors of patients with neglect are not caused by overattention to the right segment of the physical line. Left neglect occurs mainly for the mental representation formed at the time of cueing or seeing the whole extent of a line.
Functional neuroimaging studies have implicated a number of brain regions, especially the posterior parietal cortex (PPC), as being potentially important for visual–tactile multisensory integration. However, neuroimaging studies are correlational and do not prove the necessity of a region for the behavioral improvements that are the hallmark of multisensory integration. To remedy this knowledge gap, we interrupted activity in the PPC, near the junction of the anterior intraparietal sulcus and the postcentral sulcus, using MRI-guided transcranial magnetic stimulation (TMS) while subjects localized touches delivered to different fingers. As the touches were delivered, subjects viewed a congruent touch video, an incongruent touch video, or no video. Without TMS, a strong effect of multisensory integration was observed, with significantly better behavioral performance for discrimination of congruent multisensory touch than for unisensory touch alone. Incongruent multisensory touch produced a smaller improvement in behavioral performance. TMS of the PPC eliminated the behavioral advantage of both congruent and incongruent multisensory stimuli, reducing performance to unisensory levels. These results demonstrate a causal role for the PPC in visual–tactile multisensory integration. Taken together with converging evidence from other studies, these results support a model in which the PPC contains a map of space around the hand that receives input from both the visual and somatosensory modalities. Activity in this map is likely to be the neural substrate for visual–tactile multisensory integration.
hand; intraparietal sulcus; IPS; somatosensory; vision
• Dissociation between prefrontal cortical and hippocampal contributions to performance of the TUNL task.
• Prefrontal cortex lesions result in delay-dependent, but not separation-dependent, impairments on TUNL.
• Prefrontal cortex lesions result in modest impairments under high-interference conditions of TUNL.
The neural structures that support the retention of memories over time have been a subject of intense research in cognitive neuroscience. Recently, however, much attention has turned to pattern separation, the putative process by which memories are stored as unique representations that are resistant to confusion. It remains unclear, however, to what extent these two processes can be neurally dissociated. The trial-unique delayed nonmatching-to-location (TUNL) task was developed to assess spatial working memory and pattern separation function using trial-unique locations on a touch-sensitive screen (Talpos, McTighe, Dias, Saksida, & Bussey, 2010). Using this task, Talpos et al. (2010) showed that lesions of the hippocampus led to both impairments at a 6 s delay and impairments in pattern separation. The present study shows that lesions of the medial prefrontal cortex lead to a different pattern of effects: impairment at the same 6 s delay, but no hint of impairment in pattern separation. In addition, rats with medial prefrontal lesions were more susceptible to interference in this task. When compared with previously published results, these data show that whereas the prefrontal cortex and hippocampus likely interact in the service of working memory across a delay, only the hippocampus, and not the medial prefrontal cortex, is essential for pattern separation.
Prefrontal cortex; Working memory; Pattern separation; Touchscreen; TUNL; Nonmatch-to-position
Efficient performance in visual detection tasks requires excluding signals from irrelevant spatial locations. Indeed, researchers have found that detection performance in many tasks involving multiple potential target locations can be explained by the uncertainty the added locations contribute to the task. A similar type of location uncertainty may arise within the visual system itself. Converging evidence from hyperacuity and crowding studies suggests that feature localization declines rapidly in peripheral vision. This decline should add inherent position uncertainty to detection tasks. The current study used a modified detection task to measure how intrinsic position uncertainty changes with eccentricity. Subjects judged whether a Gabor target appeared within a cued region of a noisy display. The eccentricity and size of the region varied across blocks. When subjects detected the target, they used a mouse to indicate its location. This allowed measurement of localization as well as detection errors. An ideal observer degraded with internal response noise and position noise (uncertainty) accounted for both the detection and localization performance of the subjects. The results suggest that position uncertainty grows linearly with visual eccentricity and is independent of target contrast. Intrinsic position uncertainty appears to be a critical factor limiting search and detection performance.
detection/discrimination; spatial vision; computational modeling; attention; search
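The degraded-ideal-observer account above can be illustrated with a toy max-rule simulation. All parameter values (d′, trial counts, number of monitored locations) are illustrative assumptions, not the study's model: the point is only that as intrinsic position uncertainty forces the observer to monitor more candidate locations, detection accuracy falls.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_detection(d_prime, n_locations, n_trials=20000):
    """Max-rule uncertain observer: when the target is present it adds a
    signal of strength d_prime at one location; the observer monitors
    n_locations noisy responses (intrinsic position uncertainty) and says
    'present' if the maximum response exceeds a criterion."""
    noise = rng.standard_normal((n_trials, n_locations))
    present = noise.copy()
    present[:, 0] += d_prime  # signal at the true target location
    # Criterion set for ~50% false alarms on noise-only trials.
    criterion = np.quantile(noise.max(axis=1), 0.5)
    hits = (present.max(axis=1) > criterion).mean()
    false_alarms = (noise.max(axis=1) > criterion).mean()
    return (hits + (1.0 - false_alarms)) / 2.0  # proportion correct

for m in (1, 4, 16, 64):
    pc = simulate_detection(d_prime=2.0, n_locations=m)
    print(f"{m:3d} monitored locations: {pc:.2f} proportion correct")
```

Accuracy declines monotonically with the number of monitored locations, which is the behavioral signature that lets an uncertainty parameter be fit to detection data.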
Fatigue is an indispensable bioalarm for avoiding an exhausted state caused by overwork or stress. Elucidating the neural mechanism of fatigue sensation is necessary for managing fatigue properly. We performed H2 15O positron emission tomography (PET) scans to map neural activation while subjects performed 35-min fatigue-inducing task trials twice. During the PET experiment, subjects performed advanced trail-making tests, touching target circles in sequence on the display of a touch-panel screen. To identify the brain regions associated with fatigue sensation, correlation analysis was performed using the statistical parametric mapping method. The brain region whose activity correlated positively with the subjective sensation of fatigue, measured immediately after each PET scan, was located in the medial orbitofrontal cortex (Brodmann's area 10/11). Hence, the medial orbitofrontal cortex is a brain region associated with mental fatigue sensation. Our findings provide a new perspective on the neural basis of fatigue.
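The correlation analysis described above can be sketched in miniature. This is not SPM itself, and the data below are invented for illustration: for each voxel, activity across scans is correlated with the subjective fatigue rating collected after each scan, and the voxel(s) with the strongest positive correlation are flagged.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative data: 20 scans x 5 voxels of activity, plus a fatigue
# rating per scan. Voxel 2 is constructed to track fatigue.
n_scans, n_voxels = 20, 5
activity = rng.standard_normal((n_scans, n_voxels))
fatigue = np.arange(n_scans, dtype=float)   # rating after each scan
activity[:, 2] += 2.0 * fatigue             # fatigue-linked voxel

# Pearson correlation per voxel via z-scoring.
z_act = (activity - activity.mean(axis=0)) / activity.std(axis=0)
z_fat = (fatigue - fatigue.mean()) / fatigue.std()
r = z_act.T @ z_fat / n_scans               # one r value per voxel
print("most fatigue-correlated voxel:", int(r.argmax()))
```

In the real analysis this map of correlations is computed over the whole brain and thresholded for significance, which is how the medial orbitofrontal cluster was identified.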
Visual and somatosensory signals participate together in providing an estimate of the hand's spatial location. While the ability of subjects to identify the spatial location of their hand based on visual and proprioceptive signals has previously been characterized, relatively few studies have examined in detail the spatial structure of the proprioceptive map of the arm. Here, we reconstructed and analyzed the spatial structure of the estimation errors that resulted when subjects reported the location of their unseen hand across a 2D horizontal workspace. Hand position estimation was mapped under four conditions: with and without tactile feedback, and with the right and left hands. In the task, we moved each subject's hand to one of 100 targets in the workspace while their eyes were closed. Then, we either a) applied tactile stimulation to the fingertip by allowing the index finger to touch the target or b) as a control, hovered the fingertip 2 cm above the target. After returning the hand to a neutral position, subjects opened their eyes to verbally report where their fingertip had been. We measured and analyzed both the direction and magnitude of the resulting estimation errors. Tactile feedback reduced the magnitude of these estimation errors, but did not change their overall structure. In addition, the spatial structure of these errors was idiosyncratic: each subject had a unique pattern of errors that was stable between hands and over time. Finally, we found that at the population level the magnitude of the estimation errors had a characteristic distribution over the workspace: errors were smallest closer to the body. The stability of estimation errors across conditions and time suggests the brain constructs a proprioceptive map that is reliable, even if it is not necessarily accurate. The idiosyncrasy across subjects emphasizes that each individual constructs a map that is unique to their own experiences.
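The error analysis described above reduces to simple vector arithmetic. A minimal sketch, with made-up target and report coordinates (the names and values are illustrative, not the study's data):

```python
import numpy as np

# Hypothetical fingertip targets vs. verbally reported positions in a
# 2D horizontal workspace (cm).
targets = np.array([[10.0, 20.0], [25.0, 30.0], [40.0, 15.0]])
reports = np.array([[12.0, 18.0], [24.0, 33.0], [44.0, 15.0]])

errors = reports - targets                  # 2D estimation-error vectors
magnitude = np.linalg.norm(errors, axis=1)  # error size (cm)
direction = np.degrees(np.arctan2(errors[:, 1], errors[:, 0]))  # angle

for t, m, d in zip(targets, magnitude, direction):
    print(f"target {t}: error {m:.2f} cm at {d:.1f} deg")
```

Collecting these per-target vectors over a 100-target grid is what yields the spatial error maps whose structure (idiosyncratic, stable across hands and time) the abstract describes.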
The authors show that synchronous and asynchronous stimulation on groups of electrodes in subjects with retinal prostheses leads to significant changes in the percept. Understanding how pulse timing across electrodes influences the percept is fundamental to the design of a functional retinal prosthesis.
Vision loss due to retinitis pigmentosa affects an estimated 15 million people worldwide. Through collaboration between Second Sight Medical Products, Inc., and the Doheny Eye Institute, six blind human subjects underwent implantation with epiretinal 4 × 4 electrode arrays designed to directly stimulate the remaining cells of the retina, with the goal of restoring functional vision by applying spatiotemporal patterns of stimulation. To better understand spatiotemporal interactions between electrodes during synchronous and asynchronous stimulation, the authors investigated how percepts changed as a function of pulse timing across the electrodes.
Pulse trains (20, 40, 80, and 160 Hz) were presented on groups of electrodes with 800, 1600, or 2400 μm center-to-center separation. Stimulation was either synchronous (pulses were presented simultaneously across electrodes) or asynchronous (pulses were phase shifted). Using a same-different discrimination task, the authors were able to evaluate how the perceptual quality of the stimuli changed as a function of phase shifts across multiple electrodes.
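The synchronous vs. asynchronous conditions above differ only in per-electrode pulse timing. A minimal sketch of that timing scheme (function name and parameters are illustrative, not Second Sight's stimulation API):

```python
def pulse_onsets(rate_hz, n_pulses, phase_shift_ms=0.0, electrode_index=0):
    """Onset times (ms) for one electrode's pulse train. A nonzero
    phase_shift_ms staggers successive electrodes; a shift of 0 for all
    electrodes gives synchronous stimulation."""
    period_ms = 1000.0 / rate_hz
    start = electrode_index * phase_shift_ms
    return [start + i * period_ms for i in range(n_pulses)]

# Synchronous: identical 20 Hz trains on two electrodes.
sync = [pulse_onsets(20, 3, 0.0, e) for e in range(2)]
# Asynchronous: second electrode phase-shifted by 3 ms, the smallest
# shift subjects could discriminate in this study.
async_ = [pulse_onsets(20, 3, 3.0, e) for e in range(2)]
print(sync)    # [[0.0, 50.0, 100.0], [0.0, 50.0, 100.0]]
print(async_)  # [[0.0, 50.0, 100.0], [3.0, 53.0, 103.0]]
```

The same-different task then asks whether percepts evoked by these two timing patterns are distinguishable, isolating the contribution of phase over and above electric-field interactions.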
Even after controlling for electric field interactions, subjects could discriminate between spatiotemporal pulse train patterns based on differences of phase across electrodes as small as 3 ms. These findings suggest that the quality of the percept is affected not only by electric field interactions but also by spatiotemporal interactions at the neural level.
During multielectrode stimulation, interactions between electrodes have a significant influence on the quality of the percept. Understanding how these spatiotemporal interactions at the neural level influence percepts during multielectrode stimulation is fundamental to the successful design of a retinal prosthesis.