The Argus™ II 60-channel epiretinal prosthesis has been developed to provide partial restoration of vision to subjects blinded by outer retinal degenerative disease. To date the device has been implanted in 21 subjects as part of a feasibility study. In orientation-and-mobility testing (door finding and line tracking) at 6 months post-implantation, subjects showed improvements of 86% and 73%, respectively, with the system on versus off. In high-contrast Square Localization tests using a touch-screen monitor, 87% of tested subjects performed significantly better with the system on than with it off. These preliminary results show that the Argus II system provides some functional vision to blind subjects.
We studied the capabilities of the Argus II retinal prosthesis for guiding fine hand movement, and demonstrated and quantified the improvement in guidance when using the device compared with not using it, for progressively less predictable trajectories.
A total of 21 patients with retinitis pigmentosa (RP), remaining vision no more than bare light perception, and an implanted Argus II epiretinal prosthesis used a touchscreen to trace white paths on black backgrounds. Sets of paths were divided into three categories: right-angle/single-turn, mixed-angle/single-turn, and mixed-angle/two-turn. Subjects trained on paths by using prosthetic vision and auditory feedback, and then were tested without auditory feedback, with and without prosthetic vision. Custom software recorded position and timing information for any contact that subjects made with the screen. The area between the correct path and the trace, and the elapsed time to trace a path, were used to evaluate subject performance.
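The area-between-path-and-trace error metric described above can be approximated numerically. A minimal sketch, assuming both the correct path and the subject's trace are resampled at corresponding points (the function and variable names here are illustrative, not those of the study's custom software):

```python
import numpy as np

def tracing_error(path, trace):
    """Approximate the area between a correct path and a traced path.

    Both inputs are (N, 2) arrays of (x, y) screen coordinates sampled
    at corresponding points. The area is estimated as the mean
    point-wise deviation times the arc length of the correct path.
    """
    path = np.asarray(path, dtype=float)
    trace = np.asarray(trace, dtype=float)
    deviations = np.linalg.norm(trace - path, axis=1)  # point-wise gap
    segments = np.diff(path, axis=0)
    arc_length = np.sum(np.linalg.norm(segments, axis=1))
    return deviations.mean() * arc_length

# A perfect trace has zero error:
straight = np.column_stack([np.linspace(0.0, 10.0, 50), np.zeros(50)])
print(tracing_error(straight, straight))  # 0.0
```

A trace offset uniformly by 1 unit along this 10-unit path gives an error of 10 square units, matching the intuition of "area between the curves."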
For right-angle/single-turn sets, average tracing error was reduced by 63% and tracing time increased by 156% when using the prosthesis, relative to residual vision. With mixed-angle/single-turn sets, error was reduced by 53% and time to complete tracing increased by 184%. Prosthesis use decreased error by 38% and increased tracing time by 252% for paths that incorporated two turns.
Use of an epiretinal visual prosthesis can allow RP patients with no more than bare light perception to guide fine hand movement visually. Further, prosthetic input tends to make subjects slower when performing tracing tasks, presumably reflecting greater effort. (ClinicalTrials.gov number, NCT00407602.)
A total of 21 blind retinitis pigmentosa patients used retinal prostheses to visually guide their hands to trace high-contrast paths of varying complexities. Prosthesis use decreased performance error by an average of 60% and increased time to complete the task by an average of 211%. Use of an epiretinal visual prosthesis can allow RP patients with no more than bare light perception to visually guide fine hand movement. Further, prosthetic input tends to make subjects slower when performing tracing tasks, presumably reflecting greater effort.
Retinal prostheses, which restore partial vision to patients blinded by outer retinal degeneration, are currently in clinical trial. The Argus II retinal prosthesis system was recently awarded CE approval for commercial use in Europe. While retinal prosthesis users have achieved remarkable visual improvement to the point of reading letters and short sentences, the reading process is still fairly cumbersome. This study investigates the possibility of using an epiretinal prosthesis to stimulate visual braille as a sensory substitution for reading written letters and words. The Argus II retinal prosthesis system, used in this study, includes a 10 × 6 electrode array implanted epiretinally, a tiny video camera mounted on a pair of glasses, and a wearable computer that processes the video and determines the stimulation current of each electrode in real time. In the braille reading system, individual letters are created by a subset of dots from a 3 by 2 array of six dots. For the visual braille experiment, a grid of six electrodes was chosen out of the 10 × 6 Argus II array. Groups of these electrodes were then directly stimulated (bypassing the camera) to create visual percepts of individual braille letters. Experiments were performed in a single subject. Single letters were stimulated in an alternative forced choice (AFC) paradigm, and short 2–4-letter words were stimulated (one letter at a time) in an open-choice reading paradigm. The subject correctly identified 89% of single letters, 80% of 2-letter, 60% of 3-letter, and 70% of 4-letter words. This work suggests that text can successfully be stimulated and read as visual braille in retinal prosthesis patients.
retina; epiretinal prosthesis; sensory substitution; retinitis pigmentosa; blindness; perception; degeneration; sight restoration
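The visual braille scheme described above (each letter a subset of a 3 × 2 dot array, rendered by directly stimulating six chosen electrodes) can be sketched as a simple lookup. The dot-to-electrode assignment below is a hypothetical illustration; the actual Argus II channel mapping is device-specific:

```python
# Standard braille numbers dots 1-3 down the left column, 4-6 down the right.
BRAILLE_DOTS = {
    "a": {1}, "b": {1, 2}, "c": {1, 4}, "d": {1, 4, 5}, "e": {1, 5},
}

# Hypothetical assignment of the six dot positions to electrode indices
# within the 10 x 6 array (the real mapping is device-specific).
DOT_TO_ELECTRODE = {1: 11, 2: 21, 3: 31, 4: 12, 5: 22, 6: 32}

def electrodes_for_letter(letter):
    """Return the set of electrodes to stimulate for one braille letter."""
    return {DOT_TO_ELECTRODE[d] for d in BRAILLE_DOTS[letter]}

def electrodes_for_word(word):
    """Letters are presented one at a time, so return one set per letter."""
    return [electrodes_for_letter(ch) for ch in word]

print(electrodes_for_word("cab"))
```

Presenting a word then reduces to stimulating each letter's electrode subset in sequence, as in the open-choice reading paradigm.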
It has been theorized that sensorimotor processing deficits underlie Parkinson’s disease (PD) motor impairments including movement under proprioceptive control. However, it is possible that these sensorimotor processing deficits exclude tactile/proprioception sensorimotor integration: prior studies show improved movement accuracy in PD with endpoint tactile feedback, and good control in tactile-driven precision-grip tasks.
To determine whether tactile/proprioceptive integration in particular is affected by PD, nine subjects with PD (off-medication, UPDRS motor=19-42) performed an arm-matching task without visual feedback. In some trials one arm touched a static tactile cue that conflicted with dynamic proprioceptive feedback from biceps brachii muscle vibration. This sensory conflict paradigm has characterized tactile/proprioceptive integration in healthy subjects as specific to the context of tactile cue mobility assumptions and the intention to move the arm.
We found that the individuals with PD had poorer arm-matching accuracy than age-matched control subjects. However, PD-group accuracy improved with tactile feedback. Furthermore, sensory conflict conditions were resolved in the same context-dependent fashion by both subject groups. We conclude that the somatosensory integration mechanisms for prioritizing tactile and proprioceptive feedback in this task are not disrupted by PD, and are not related to the observed proprioceptive deficits.
Parkinson’s disease; proprioception; touch; sensory integration
To restore functional form vision, epiretinal prostheses have been implanted in blind human subjects to electrically elicit percepts. These findings suggest that frequency modulation may be the best way to produce percepts that range widely in brightness while minimizing loss of spatial resolution.
In an effort to restore functional form vision, epiretinal prostheses that elicit percepts by directly stimulating remaining retinal circuitry were implanted in human subjects with advanced retinitis pigmentosa (RP). In this study, manipulating pulse train frequency and amplitude had distinct effects on phosphene size and brightness.
Experiments were performed on a single subject with severe RP (implanted with a 16-channel epiretinal prosthesis in 2004) on nine individual electrodes. Psychophysical techniques were used to measure both the brightness and size of phosphenes when the biphasic pulse train was varied by either modulating the current amplitude (with constant frequency) or the stimulating frequency (with constant current amplitude).
Increasing stimulation frequency always increased brightness, while having a smaller effect on the size of elicited phosphenes. In contrast, increasing stimulation amplitude generally increased both the size and brightness of phosphenes. These experimental findings can be explained by using a simple computational model based on previous psychophysical work and the expected spatial spread of current from a disc electrode.
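The separable effects reported above can be captured by a toy model in which brightness grows with both amplitude and frequency, while phosphene size is set by the retinal extent over which the spreading current exceeds threshold. The falloff function and parameter values below are illustrative assumptions, not the published model:

```python
import math

def phosphene_model(amplitude, frequency, threshold=1.0, a=0.5):
    """Toy model: brightness scales with both amplitude and frequency;
    size is the retinal radius where current still exceeds threshold.

    Current falloff with distance r from a disc electrode is modeled as
    amplitude / (1 + (r / a)**2) -- an illustrative choice. Frequency is
    assumed not to change the spatial spread of current.
    """
    brightness = amplitude * frequency  # supra-threshold drive
    if amplitude <= threshold:
        size = 0.0
    else:
        # Solve amplitude / (1 + (r/a)^2) = threshold for r:
        size = a * math.sqrt(amplitude / threshold - 1.0)
    return brightness, size

# Doubling frequency doubles brightness but leaves size unchanged;
# doubling amplitude increases both.
b1, s1 = phosphene_model(amplitude=2.0, frequency=20.0)
b2, s2 = phosphene_model(amplitude=2.0, frequency=40.0)
b3, s3 = phosphene_model(amplitude=4.0, frequency=20.0)
print(s1 == s2, b2 > b1, s3 > s1, b3 > b1)  # True True True True
```

This reproduces the qualitative finding: frequency modulation changes brightness without enlarging the phosphene, whereas amplitude modulation changes both.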
Given that amplitude and frequency have separable effects on percept size, these findings suggest that frequency modulation improves the encoding of a wide range of brightness levels without a loss of spatial resolution. Future retinal prosthesis designs could benefit from having the flexibility to manipulate pulse train amplitude and frequency independently (clinicaltrials.gov number, NCT00279500).
Action requires knowledge of our body location in space. Here we asked if interactions with the external world prior to a reaching action influence how visual location information is used. We investigated if the temporal synchrony between viewing and feeling touch modulates the integration of visual and proprioceptive body location information for action. We manipulated the synchrony between viewing and feeling touch in the Rubber Hand Illusion paradigm prior to participants performing a ballistic reaching task to a visually specified target. When synchronous touch was given, reaching trajectories were significantly shifted compared to asynchronous touch. The direction of this shift suggests that touch influences the encoding of hand position for action. On the basis of these data and previous findings, we propose that the brain uses correlated cues from passive touch and vision to update its own position for action and experience of self-location.
Parietal Cortex; Touch; Action; Human Body; Body Location; Rubber Hand Illusion
The purpose of this article is to present a wide field electrode array that may increase the field of vision in patients implanted with a retinal prosthesis.
Mobility is often impaired in patients with low vision, particularly in those with peripheral visual loss. Studies on low vision patients as well as simulation studies on normally sighted individuals have indicated a strong correlation between the visual field and mobility. In addition, it has been shown that an increased visual field is associated with a significant improvement in visual acuity and object discrimination. Current electrode arrays implanted in animals or humans vary in size; however, the retinal area covered by the electrodes has a maximum projected visual field of about 10°. We have designed wide-field electrode arrays that could potentially provide a visual field of 34°, which may significantly improve mobility. Tests performed on a mechanical eye model showed that it was possible to fix flexible polyimide dummy electrode arrays 10 mm wide onto the retina using a single retinal tack. They also showed that the arrays could conform to the inner curvature of the eye. Surgeries on an enucleated porcine eye model demonstrated the feasibility of implanting 10 mm wide arrays through a 5 mm eye wall incision.
Wide field; electrode array; retinal prosthesis; visual field; mobility; retinitis pigmentosa; age related macular degeneration
We analyze the problem of obstacle avoidance from a Bayesian decision-theoretic perspective using an experimental task in which reaches around a virtual obstacle were made toward targets on an upright monitor. Subjects received monetary rewards for touching the target and incurred losses for accidentally touching the intervening obstacle. The locations of target-obstacle pairs within the workspace were varied from trial to trial. We compared human performance to that of a Bayesian ideal movement planner (who chooses motor strategies maximizing expected gain) using the Dominance Test employed in Hudson et al. (2007). The ideal movement planner suffers from the same sources of noise as the human, but selects movement plans that maximize expected gain in the presence of that noise. We find good agreement between the predictions of the model and actual performance in most but not all experimental conditions.
In everyday, cluttered environments, moving to reach or grasp an object can result in unintended collisions with other objects along the path of movement. Depending on what we run into (a priceless Ming vase, a crotchety colleague) we can suffer serious monetary or social consequences. It makes sense to choose movement trajectories that trade off the value of reaching a goal against the consequences of unintended collisions along the way. In the research described here, subjects made speeded movements to touch targets while avoiding obstacles placed along the natural reach trajectory. There were explicit monetary rewards for hitting the target and explicit monetary costs for accidentally hitting the intervening obstacle. We varied the cost and location of the obstacle across conditions. The task was to earn as large a monetary bonus as possible, which required that reaches curve around obstacles only to the extent justified by the location and cost of the obstacle. We compared human performance in this task to that of a Bayesian movement planner who maximized expected gain on each trial. In most conditions, but not all, movement strategies were close to optimal.
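The ideal-planner comparison described above rests on choosing the movement strategy that maximizes expected gain under motor noise. A one-dimensional sketch, assuming a Gaussian endpoint distribution and interval-shaped target and obstacle regions (all numbers illustrative):

```python
import math

def norm_cdf(x, mu, sigma):
    """Cumulative distribution function of a Gaussian."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def expected_gain(aim, target, obstacle, reward, cost, sigma):
    """Expected gain of aiming at `aim` when the executed endpoint is
    Gaussian around the aim point; target/obstacle are (lo, hi) intervals."""
    p_target = norm_cdf(target[1], aim, sigma) - norm_cdf(target[0], aim, sigma)
    p_obstacle = norm_cdf(obstacle[1], aim, sigma) - norm_cdf(obstacle[0], aim, sigma)
    return reward * p_target - cost * p_obstacle

def best_aim(target, obstacle, reward, cost, sigma):
    """Grid search for the aim point maximizing expected gain."""
    candidates = [i * 0.01 for i in range(-500, 501)]
    return max(candidates,
               key=lambda a: expected_gain(a, target, obstacle, reward, cost, sigma))

# With a costly obstacle just left of the target, the optimal aim
# shifts rightward, away from the obstacle.
aim = best_aim(target=(0.0, 1.0), obstacle=(-1.0, 0.0),
               reward=10, cost=50, sigma=0.5)
print(aim > 0.5)  # True
```

The maximizing aim point shifts away from the costly obstacle only as far as the cost and noise justify, mirroring the curved reaches observed experimentally.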
We tested whether changing accuracy demands for simple pointing movements leads humans to adjust the feedback control laws that map sensory signals from the moving hand to motor commands. Subjects made repeated pointing movements in a virtual environment to touch a button whose shape varied randomly from trial to trial among squares, rectangles oriented perpendicular to the movement path, and rectangles oriented parallel to the movement path. Subjects performed the task on a horizontal table, but saw the target configuration and a virtual rendering of their pointing finger through a mirror mounted between a monitor and the table. On one-third of trials, the position of the virtual finger was perturbed by ±1 cm either in the movement direction or perpendicular to the movement direction when the finger passed behind an occluder. Subjects corrected quickly for the perturbations despite not consciously noticing them; however, they corrected almost twice as much for perturbations aligned with the narrow dimension of a target than for perturbations aligned with the long dimension. These changes in apparent feedback gain appeared in the kinematic trajectories soon after the time of the perturbations, indicating that they reflect differences in the feedback control law used throughout the duration of movements. The results indicate that the brain adjusts its feedback control law for individual movements "on-demand" to fit task demands. Simulations of optimal control laws for a two-joint arm show that accuracy demands alone, coupled with signal-dependent noise, lead to qualitatively the same behavior.
Feedback; optimal control; motor control; pointing; online control
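The shape-dependent correction gains reported above can be sketched as a feedback law whose per-axis gain shrinks with target extent. The inverse-width scaling is an illustrative assumption, not the fitted optimal-control model:

```python
import numpy as np

def feedback_correction(perturbation, target_size):
    """Toy feedback law: the correction applied to a sensed perturbation
    is larger along the target's narrow dimension.

    Gains are inversely proportional to target extent (normalized so the
    narrowest axis gets gain 1) -- an illustrative choice only.
    """
    perturbation = np.asarray(perturbation, dtype=float)
    target_size = np.asarray(target_size, dtype=float)
    gains = np.minimum(1.0, target_size.min() / target_size)
    return -gains * perturbation

# A 1 cm perturbation is corrected fully along the narrow (1 cm) axis
# and only half as much along the long (2 cm) axis:
print(feedback_correction([1.0, 1.0], target_size=[1.0, 2.0]))
```

With a square target the gains are equal, so corrections become isotropic, consistent with the reported behavior.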
In two experiments, the ability to use multisensory information (haptic information, provided by lightly touching a stationary surface, and vision) for quiet standing was examined in typically developing (TD) children, adults, and in 7-year-old children with Developmental Coordination Disorder (DCD). Four sensory conditions (no touch/no vision, with touch/no vision, no touch/with vision, and with touch/with vision) were employed. In experiment 1, we tested 4-, 6- and 8-year-old TD children and adults to provide a developmental landscape for performance on this task. In experiment 2, we tested a group of 7-year-old children with DCD and their age-matched TD peers. For all groups, touch robustly attenuated standing sway suggesting that children as young as 4 years old use touch information similarly to adults. Touch was less effective in children with DCD compared to their TD peers, especially in attenuating their sway velocity. Children with DCD, unlike their TD peers, also benefited from using vision to reduce sway. The present results suggest that children with DCD benefit from using vision in combination with touch information for standing control possibly due to their less well developed internal models of body orientation and self-motion. Internal model deficits, combined with other known deficits such as postural muscles activation timing deficits, may exacerbate the balance impairment in children with DCD.
Developmental Coordination Disorder (DCD); Standing Balance; Multisensory; Light Touch; Vision
This study aimed to clarify the mechanisms of left unilateral spatial neglect found in the bisection of lines after cueing to the left end point, and to determine whether neglect occurs for the mental representation of a line. A representational bisection task was developed to eliminate the influence of the right segment of the physical line that would attract attention. Eight patients with typical left unilateral spatial neglect underwent line and representational bisection tasks on a computer display with a touch panel. In the line bisection with cueing, they bisected a line after touching the left end point. In the representational bisection, the patients were presented with a line until they touched the left end point. On the blank display, they then pointed to the subjective midpoint of the erased line. Performance on the two bisection tasks was compared when the length and position of the stimulus lines were varied. Errors in the representational bisection were greater than or equivalent to those in the line bisection with cueing. The effect of line length, in which the errors became greater for longer lines, was found equally in the line bisection with cueing and the representational bisection. This was confirmed in the condition where the right end point was placed at a fixed position and the line length was varied. These results indicate that, despite cueing to the left end point, rightward bisection errors of patients with neglect are not caused by overattention to the right segment of the physical line. Left neglect occurs mainly for the mental representation formed at the time of cueing or seeing the whole extent of a line.
Functional neuroimaging studies have implicated a number of brain regions, especially the posterior parietal cortex (PPC), as being potentially important for visual–tactile multisensory integration. However, neuroimaging studies are correlational and do not prove the necessity of a region for the behavioral improvements that are the hallmark of multisensory integration. To remedy this knowledge gap, we interrupted activity in the PPC, near the junction of the anterior intraparietal sulcus and the postcentral sulcus, using MRI-guided transcranial magnetic stimulation (TMS) while subjects localized touches delivered to different fingers. As the touches were delivered, subjects viewed a congruent touch video, an incongruent touch video, or no video. Without TMS, a strong effect of multisensory integration was observed, with significantly better behavioral performance for discrimination of congruent multisensory touch than for unisensory touch alone. Incongruent multisensory touch produced a smaller improvement in behavioral performance. TMS of the PPC eliminated the behavioral advantage of both congruent and incongruent multisensory stimuli, reducing performance to unisensory levels. These results demonstrate a causal role for the PPC in visual–tactile multisensory integration. Taken together with converging evidence from other studies, these results support a model in which the PPC contains a map of space around the hand that receives input from both the visual and somatosensory modalities. Activity in this map is likely to be the neural substrate for visual–tactile multisensory integration.
hand; intraparietal sulcus; IPS; somatosensory; vision
Fatigue is an indispensable bioalarm for avoiding an exhaustive state caused by overwork or stress. It is necessary to elucidate the neural mechanism of fatigue sensation in order to manage fatigue properly. We performed H2 15O positron emission tomography (PET) scans to identify neural activations while subjects performed two 35-min fatigue-inducing task trials. During the PET experiment, subjects performed advanced trail-making tests, touching the target circles in sequence on the display of a touch-panel screen. To identify the brain regions associated with fatigue sensation, correlation analysis was performed using the statistical parametric mapping method. The brain region exhibiting a positive correlation in activity with the subjective sensation of fatigue, measured immediately after each PET scan, was located in the medial orbitofrontal cortex (Brodmann's area 10/11). Hence, the medial orbitofrontal cortex is a brain region associated with mental fatigue sensation. Our findings provide a new perspective on the neural basis of fatigue.
The authors show that synchronous and asynchronous stimulation on groups of electrodes in subjects with retinal prostheses leads to significant changes in the percept. Understanding how pulse timing across electrodes influences the percept is fundamental to the design of a functional retinal prosthesis.
Vision loss due to retinitis pigmentosa affects an estimated 15 million people worldwide. Through collaboration between Second Sight Medical Products, Inc., and the Doheny Eye Institute, six blind human subjects underwent implantation with epiretinal 4 × 4 electrode arrays designed to directly stimulate the remaining cells of the retina, with the goal of restoring functional vision by applying spatiotemporal patterns of stimulation. To better understand spatiotemporal interactions between electrodes during synchronous and asynchronous stimulation, the authors investigated how percepts changed as a function of pulse timing across the electrodes.
Pulse trains (20, 40, 80, and 160 Hz) were presented on groups of electrodes with 800, 1600, or 2400 μm center-to-center separation. Stimulation was either synchronous (pulses were presented simultaneously across electrodes) or asynchronous (pulses were phase shifted). Using a same-different discrimination task, the authors were able to evaluate how the perceptual quality of the stimuli changed as a function of phase shifts across multiple electrodes.
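The synchronous versus asynchronous conditions above differ only in a per-electrode phase shift of the pulse train. A minimal sketch of the onset-time bookkeeping (parameter names illustrative):

```python
def pulse_times(freq_hz, duration_s, phase_shift_s=0.0):
    """Onset times (in seconds) of a periodic pulse train.

    Synchronous stimulation uses the same (zero) phase shift on every
    electrode; asynchronous stimulation phase-shifts each electrode's
    train relative to the others.
    """
    period = 1.0 / freq_hz
    times, t = [], phase_shift_s
    while t < duration_s:
        times.append(round(t, 6))  # round away float accumulation
        t += period
    return times

# Two electrodes at 20 Hz: synchronous (no shift) vs. asynchronous
# (second electrode shifted by 3 ms, the smallest discriminable shift
# reported in this study).
sync = pulse_times(20, 0.2)
shifted = pulse_times(20, 0.2, phase_shift_s=0.003)
print(sync, shifted)
```

In the synchronous condition every electrode fires at the same instants; in the asynchronous condition each pulse on the second electrode lags by the phase shift, which subjects could discriminate down to 3 ms.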
Even after controlling for electric field interactions, subjects could discriminate between spatiotemporal pulse train patterns based on differences of phase across electrodes as small as 3 ms. These findings suggest that the quality of the percept is affected not only by electric field interactions but also by spatiotemporal interactions at the neural level.
During multielectrode stimulation, interactions between electrodes have a significant influence on the quality of the percept. Understanding how these spatiotemporal interactions at the neural level influence percepts during multielectrode stimulation is fundamental to the successful design of a retinal prosthesis.
Visual and somatosensory signals participate together in providing an estimate of the hand's spatial location. While the ability of subjects to identify the spatial location of their hand based on visual and proprioceptive signals has previously been characterized, relatively few studies have examined in detail the spatial structure of the proprioceptive map of the arm. Here, we reconstructed and analyzed the spatial structure of the estimation errors that resulted when subjects reported the location of their unseen hand across a 2D horizontal workspace. Hand position estimation was mapped under four conditions: with and without tactile feedback, and with the right and left hands. In the task, we moved each subject's hand to one of 100 targets in the workspace while their eyes were closed. Then, we either a) applied tactile stimulation to the fingertip by allowing the index finger to touch the target or b) as a control, hovered the fingertip 2 cm above the target. After returning the hand to a neutral position, subjects opened their eyes to verbally report where their fingertip had been. We measured and analyzed both the direction and magnitude of the resulting estimation errors. Tactile feedback reduced the magnitude of these estimation errors, but did not change their overall structure. In addition, the spatial structure of these errors was idiosyncratic: each subject had a unique pattern of errors that was stable between hands and over time. Finally, we found that at the population level the magnitude of the estimation errors had a characteristic distribution over the workspace: errors were smallest closer to the body. The stability of estimation errors across conditions and time suggests the brain constructs a proprioceptive map that is reliable, even if it is not necessarily accurate. The idiosyncrasy across subjects emphasizes that each individual constructs a map that is unique to their own experiences.
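The error analysis described above reduces to computing, per trial, the vector from the actual target to the reported fingertip position, together with its magnitude and direction. A minimal sketch (coordinate conventions assumed, not taken from the study):

```python
import numpy as np

def estimation_errors(targets, reports):
    """Per-trial error vectors, magnitudes, and directions.

    `targets` and `reports` are (N, 2) arrays of workspace coordinates:
    actual fingertip positions and the verbally reported positions.
    Directions are returned in degrees, counterclockwise from +x.
    """
    errors = np.asarray(reports, dtype=float) - np.asarray(targets, dtype=float)
    magnitudes = np.linalg.norm(errors, axis=1)
    directions = np.degrees(np.arctan2(errors[:, 1], errors[:, 0]))
    return errors, magnitudes, directions

targets = [[0.0, 0.0], [10.0, 0.0]]
reports = [[3.0, 4.0], [10.0, -2.0]]
_, mags, dirs = estimation_errors(targets, reports)
print(mags, dirs)
```

Averaging the magnitudes over targets binned by distance from the body would reproduce the population-level analysis, in which errors were smallest closer to the body.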
We examined the conversational skills of 2 adult males with severe motor and speech deficits resulting from cerebral palsy. A multiple baseline design across subjects was used to determine the effectiveness of an intervention strategy designed to teach them to use an augmentative communication system (Touch Talker) independently. The dependent measure was the number of conversation initiations relative to conversation reactions during spontaneous communication across baseline and treatment. The treatment included specific training on using the augmentative system to participate in communication. Once the intervention began, the production of conversation initiations accelerated at a rapid rate. The treatment program was effective in training the subjects to use the augmentative system to increase conversation participation. These results demonstrate that training on the operation of the device alone is not sufficient to ensure improvement in conversation performance, and that it is important to incorporate direct conversational treatment when providing instruction on the use of augmentative communication systems for severely speech-impaired individuals.
Recent theoretical advances on the topic of body representations have raised the question whether spatial perception of touch and nociception involve the same representations. Various authors have established that subjective localizations of touch and nociception are displaced in a systematic manner. The relation between veridical stimulus locations and localizations can be described in the form of a perceptual map; these maps differ between subjects. Recently, evidence was found for a common set of body representations to underlie spatial perception of touch and slow and fast pain, which receive information from modality-specific primary representations. There are neurophysiological clues that the various cutaneous senses may not share the same primary representation. If this is the case, then differences in primary representations between touch and nociception may cause subject-dependent differences in perceptual maps of these modalities. We studied localization of tactile and nociceptive sensations on the forearm using electrocutaneous stimulation. The perceptual maps of these modalities differed at the group level. When assessed for individual subjects, the differences in localization varied in nature between subjects. The agreement of perceptual maps of the two modalities was moderate. These findings are consistent with a common internal body representation underlying spatial perception of touch and nociception. The subject-level differences suggest that in addition to these representations other aspects, possibly differences in primary representation and/or the influence of stimulus parameters, lead to differences in perceptual maps in individuals.
perceptual map; touch; nociception; electrocutaneous stimulation; localization; body representations; primary representations
We are studying the effectiveness of a semicircular canal prosthesis to improve postural control, perception of spatial orientation, and the VOR in rhesus monkeys with bilateral vestibular hypofunction. Balance is examined by measuring spontaneous sway of the body during quiet stance and postural responses evoked by head turns and rotation of the support surface; perception is measured with a task derived from the subjective visual vertical (SVV) test during static and dynamic rotation in the roll plane; and the angular VOR is measured during rotation about the roll, pitch, and yaw axes. After the normal responses are characterized, bilateral vestibular loss is induced with intratympanic gentamicin, and then multisite stimulating electrodes are chronically implanted into the ampullae of all three canals in one ear. The postural, perceptual, and VOR responses are then characterized in the ablated state, and then bilateral, chronic electrical stimulation is applied to the ampullary nerves using a prosthesis that senses angular head velocity in three-dimensions and uses this information to modulate the rate of current pulses provided by the implanted electrodes. We are currently characterizing two normal monkeys with these paradigms, and vestibular ablation and electrode implantation are planned for the near future. In one prior rhesus monkey tested with this approach, we found that a one-dimensional (posterior canal) prosthesis improved balance during head turns, perceived head orientation during roll tilts, and the VOR in the plane of the instrumented canal. We therefore predict that the more complete information provided by a three-dimensional prosthesis that modulates activity in bilaterally-paired canals will exceed the benefits provided by the one-dimensional, unilateral approach used in our preliminary studies.
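The prosthesis described above senses angular head velocity and uses it to modulate the rate of current pulses on the ampullary electrodes. A one-channel sketch, with baseline rate, gain, and saturation limits chosen purely for illustration:

```python
def pulse_rate(head_velocity_dps, baseline_hz=100.0, gain=0.5,
               min_hz=10.0, max_hz=400.0):
    """Map angular head velocity (deg/s, in one canal's plane) to a
    stimulation pulse rate in Hz.

    Excitatory rotation raises the rate above baseline; inhibitory
    rotation lowers it, with saturation at the rate limits. All
    parameter values are illustrative, not the prosthesis settings.
    """
    rate = baseline_hz + gain * head_velocity_dps
    return max(min_hz, min(max_hz, rate))

# At rest the baseline rate is delivered; rotation in the excitatory
# direction raises the rate, and fast inhibitory rotation saturates low.
print(pulse_rate(0.0), pulse_rate(100.0), pulse_rate(-300.0))
# 100.0 150.0 10.0
```

A three-dimensional prosthesis would run one such mapping per instrumented canal, each driven by the component of head velocity in that canal's plane.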
We describe how upper limb amputees can be made to experience a rubber hand as part of their own body. This was accomplished by applying synchronous touches to the stump, which was out of view, and to the index finger of a rubber hand, placed in full view (26 cm medial to the stump). This elicited an illusion of sensing touch on the artificial hand rather than on the stump, and a feeling of ownership of the rubber hand developed. This effect was supported by quantitative subjective reports in the form of questionnaires, behavioural data in the form of misreaching in a pointing task when asked to localize the position of the touch, and physiological evidence obtained from skin conductance responses when the hand prosthesis was threatened. Our findings outline a simple method for transferring tactile sensations from the stump to a prosthetic limb by tricking the brain, thereby making an important contribution to the field of neuroprosthetics, where a major goal is to develop artificial limbs that feel like real parts of the body.
limb ownership; prosthetics; body representation; plasticity; illusion; referred sensation
To assess the possible effects of retinal prosthesis implant location on the initiation and stability of pursuit eye movements.
Six normally sighted subjects visually tracked a horizontally moving target in natural vision and in simulated prosthetic vision. Subjects were instructed to press a key when the target jumped. Prosthetic vision was simulated with a 10 × 10 array of 1° diameter phosphenes. Three implant locations in the retina were simulated: macular, 8° superior, and 8° nasal. Target motion had two speeds: 4°/s and 8°/s. Eye movement latency, horizontal stability, and vertical stability were assessed. Key-press behaviors responding to target jump were analyzed to evaluate functional eye movements.
Compared with natural vision, horizontal eye position with respect to target position was less stable in simulated prosthetic vision at macular, superior, and nasal implant locations, in ascending order of the degree of instability. Vertical eye position with respect to target position in simulated prosthetic vision with the superior implant location was less stable in tracking slow target motion than fast. Eye movement latency in simulated prosthetic vision was longer than in natural vision. Key-press performance was impaired in simulated prosthetic vision.
Pursuit eye movements in prosthetic vision, compared to natural vision, are significantly slower in initiation and less smooth in motion. They seem, however, still functional, even if the prosthesis is implanted in the peripheral retina. A superior implant locus may help the prosthesis wearer better control horizontal eye movements, which are more frequently used in the activities of daily living.
Previous evidence points to a causal link between playing action video games and enhanced cognition and perception. However, the benefits of playing other types of video games are under-investigated. We examined whether playing non-action games also improves cognition. Hence, we compared the transfer effects of an action game with those of several non-action games that impose different cognitive demands.
We instructed 5 groups of non-gamer participants to play one game each on a mobile device (iPhone/iPod Touch) for one hour a day, five days a week, over four weeks (20 hours total). Games included an action game, a spatial memory game, a match-3 game, a hidden-object game, and an agent-based life simulation. Participants performed four behavioral tasks before and after video game training to assess transfer effects. Tasks included an attentional blink task, a spatial memory and visual search dual task, a visual filter memory task to assess multiple-object tracking and cognitive control, and a complex verbal span task. Action game playing eliminated attentional blink and improved cognitive control and multiple-object tracking. The match-3, spatial memory, and hidden-object games improved visual search performance, while the latter two also improved spatial working memory. Complex verbal span improved after match-3 and action game training.
Cognitive improvements were not limited to action game training alone, and different games enhanced different aspects of cognition. We conclude that frequently exercising specific cognitive abilities in a video game improves performance in tasks that share common underlying demands. Overall, these results suggest that many video game-related cognitive improvements may not be due to training of general, broad cognitive systems such as executive attentional control, but instead to frequent utilization of specific cognitive processes during game play. Thus, many training-related improvements to cognition may be attributed to near-transfer effects.
Feeling touch on a body part is paradigmatically considered to require stimulation of tactile afferents from the body part in question, at least in healthy non-synaesthetic individuals. In contrast to this view, we report a perceptual illusion where people experience “phantom touches” on a right rubber hand when they see it brushed simultaneously with brushes applied to their left hand. Such illusory duplication and transfer of touch from the left to the right hand was only elicited when a homologous (i.e., left and right) pair of hands was brushed in synchrony for an extended period of time. This stimulation caused the majority of our participants to perceive the right rubber hand as their own and to sense two distinct touches – one located on the right rubber hand and the other on their left (stimulated) hand. This effect was supported by quantitative subjective reports in the form of questionnaires, behavioral data from a task in which participants pointed to the felt location of their right hand, and physiological evidence obtained by skin conductance responses when threatening the model hand. Our findings suggest that visual information augments subthreshold somatosensory responses in the ipsilateral hemisphere, thus producing a tactile experience from the non-stimulated body part. This finding is important because it reveals a new bilateral multisensory mechanism for tactile perception and limb ownership.
Background. This cross-sectional study examined the effect of aging on performing finger-pointing tasks involving choices and whether experienced older Tai Chi practitioners perform better than healthy older controls in such tasks. Methods. Thirty young students and 30 healthy older controls were compared with 31 older Tai Chi practitioners. All subjects performed a rapid index finger-pointing task. The visual signal appeared randomly under 3 conditions: (1) touch a black ball as quickly and as accurately as possible, (2) do not touch a white ball, (3) touch only the white ball when a black and a white ball appear simultaneously. Reaction time (RT) of the anterior deltoid electromyogram, movement time (MT) from electromyogram onset to touching of the target, end-point accuracy relative to the center of the target, and the number of wrong movements were recorded. Results. Young students displayed significantly faster RT and MT, achieving significantly greater end-point accuracy and fewer wrong movements than older controls. Older Tai Chi practitioners had significantly faster MT than older controls. Conclusion. Finger-pointing tasks with a choice paradigm became slower and less accurate with age. Positive findings suggest that Tai Chi may slow the aging effect on eye-hand coordination tasks involving choices that require more cognitive processing.
Neuroplasticity underlies the brain's ability to alter perception and behavior through training, practice, or simply exposure to sensory stimulation. Improvement of tactile discrimination has been repeatedly demonstrated after repetitive sensory stimulation (rSS) of the fingers; however, it remains unknown whether such protocols also affect hand dexterity or pain thresholds. We therefore stimulated the thumb and index finger of young adults to investigate the impact of rSS not only on tactile discrimination but also on dexterity, pain, and touch thresholds. We observed an improvement in the pegboard task in which subjects used the thumb and index finger only. Moreover, stimulating two fingers simultaneously potentiated the efficacy of rSS: we observed a higher gain in discrimination performance compared with single-finger rSS. In contrast, pain and touch thresholds remained unaffected. Our data suggest that the selection of particular fingers modulates the efficacy of rSS, thereby affecting processes controlling sensorimotor integration.
Previous research suggests that visual and haptic object recognition are viewpoint-dependent both within- and cross-modally. However, this conclusion may not be generally valid as it was reached using objects oriented along their extended y-axis, resulting in differential surface processing in vision and touch. In the present study, we removed this differential by presenting objects along the z-axis, thus making all object surfaces more equally available to vision and touch.
Participants studied previously unfamiliar objects, in groups of four, using either vision or touch. Subsequently, they performed a four-alternative forced-choice object identification task with the studied objects presented in both unrotated and rotated (180° about the x-, y-, and z-axes) orientations. Rotation impaired within-modal recognition accuracy in both vision and touch, but not cross-modal recognition accuracy. Within-modally, visual recognition accuracy was reduced by rotation about the x- and y-axes more than the z-axis, whilst haptic recognition was equally affected by rotation about all three axes. Cross-modal (but not within-modal) accuracy correlated with spatial (but not object) imagery scores.
The viewpoint-independence of cross-modal object identification points to its mediation by a high-level abstract representation. The correlation between spatial imagery scores and cross-modal performance suggests that construction of this high-level representation is linked to the ability to perform spatial transformations. Within-modal viewpoint-dependence appears to have a different basis in vision than in touch, possibly because surface occlusion is important in vision but not touch.