Transcranial alternating current stimulation (tACS) is used in clinical applications and basic neuroscience research. Although its behavioral effects are evident from prior reports, current understanding of the mechanisms that underlie these effects is limited. We used motion perception, a percept with relatively well-known properties and underlying neural mechanisms, to investigate tACS mechanisms. Healthy human volunteers showed a surprising improvement in motion sensitivity when visual stimuli were paired with 10 Hz tACS. In addition, tACS reduced the motion aftereffect, and this reduction was correlated with the improvement in motion sensitivity. Electrical stimulation had no consistent effect when applied before presenting a visual stimulus or during recovery from motion adaptation. Together, these findings suggest that the perceptual effects of tACS result from an attenuation of adaptation. Important consequences for the practical use of tACS follow from our work. First, because this mechanism interferes only with adaptation, tACS can be targeted at subsets of neurons (by adapting them), even when the applied currents spread widely throughout the brain. Second, by interfering with adaptation, this mechanism provides a means by which electrical stimulation can generate behavioral effects that outlast the stimulation.
discrimination sensitivity; mechanisms; motion adaptation; motion aftereffect; transcranial alternating current stimulation
Visual localization is based on the complex interplay of bottom-up and top-down processing. Based on previous work, the posterior parietal cortex (PPC) is assumed to play an essential role in this interplay. In this study, we investigated the causal role of the PPC in visual localization. Specifically, our goal was to determine whether modulation of the PPC via transcranial direct current stimulation (tDCS) could induce visual mislocalization similar to that induced by an exogenous attentional cue (Wright, Morris, & Krekelberg, 2011). We placed one stimulation electrode over the right PPC and the other over the left PPC (dual tDCS) and varied the polarity of the stimulation. We found that this manipulation altered visual localization; this supports the causal involvement of the PPC in visual localization. Notably, mislocalization was more rightward when the cathode was placed over the right PPC than when the anode was placed over the right PPC. This mislocalization was found within a few minutes of stimulation onset; it dissipated during stimulation but resurfaced after stimulation offset and lasted for another 10–15 min. On the assumption that excitability is reduced beneath the cathode and increased beneath the anode, these findings support the view that each hemisphere biases processing to the contralateral hemifield and that the balance of activation between the hemispheres contributes to position perception (Kinsbourne, 1977; Szczepanski, Konen, & Kastner, 2010).
visual localization; spatial attention; transcranial electrical stimulation; interhemispheric competition; position perception
Eye-position signals (EPS) are found throughout the primate visual system and are thought to provide a mechanism for representing spatial locations in a manner that is robust to changes in eye position. It remains unknown, however, whether cortical EPS (also known as “gain fields”) have the necessary spatial and temporal characteristics to fulfill their purported computational roles. To quantify these EPS, we combined single-unit recordings in four dorsal visual areas of behaving rhesus macaques (the lateral intraparietal, ventral intraparietal, middle temporal, and medial superior temporal areas) with likelihood-based population-decoding techniques. The decoders used knowledge of spiking statistics to estimate eye position during fixation from a set of observed spike counts across neurons. Importantly, these samples were short in duration (100 ms) and taken from individual trials to mimic the real-time estimation problem faced by the brain. The results suggest that cortical EPS provide an accurate and precise representation of eye position, albeit with unequal signal fidelity across brain areas and a modest underestimation of eye eccentricity. The underestimation of eye eccentricity predicted a pattern of mislocalization that matches the errors made by human observers. In addition, we found that eccentric eye positions were associated with enhanced precision relative to the primary eye position. This predicts that positions in visual space should be represented more reliably during eccentric gaze than while looking straight ahead. Together, these results suggest that cortical eye-position signals provide a usable head-centered representation of visual space on timescales that are compatible with the duration of a typical ocular fixation.
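The likelihood-based decoding idea can be sketched in a few lines: assume independent Poisson spiking and (hypothetical) linear gain-field tuning, then pick the eye position that maximizes the log-likelihood of a single 100-ms sample of spike counts. All tuning parameters below are invented for illustration; this is a sketch of the approach, not the study's fitted model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear "gain field" tuning: each neuron's mean firing rate
# varies linearly with horizontal eye position (all numbers are invented).
n_neurons = 50
slopes = rng.uniform(-2.0, 2.0, n_neurons)      # spikes/s per degree
baselines = rng.uniform(5.0, 20.0, n_neurons)   # spikes/s at straight-ahead
window = 0.1                                    # 100-ms counting window (s)

def mean_counts(eye_pos):
    """Expected spike counts in one window for a given eye position (deg)."""
    return np.clip(baselines + slopes * eye_pos, 0.1, None) * window

def decode(counts, candidates=np.linspace(-20, 20, 401)):
    """Maximum-likelihood eye-position estimate from one vector of counts,
    assuming independent Poisson spiking (count-factorial terms dropped)."""
    log_like = [np.sum(counts * np.log(mean_counts(x)) - mean_counts(x))
                for x in candidates]
    return candidates[int(np.argmax(log_like))]

# Decode a single noisy 100-ms sample "recorded" with the eye at 10 deg.
true_pos = 10.0
sample = rng.poisson(mean_counts(true_pos))
estimate = decode(sample)
```

Because the sample is a single trial of short duration, individual estimates are noisy; averaged over trials they cluster around the true eye position.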
The detection of visual motion requires temporal delays to compare current with earlier visual input. Models of motion detection assume that these delays reside in separate classes of slow and fast thalamic cells, or in slow and fast synaptic transmission. We used a data-driven modeling approach to generate a model that instead uses recurrent network dynamics with a single, fixed temporal integration window to implement the velocity computation. This model successfully reproduced the temporal response dynamics of a population of motion-sensitive neurons in the macaque middle temporal area (MT), and its constituent parts matched many of the properties found in the motion processing pathway (e.g., Gabor-like receptive fields (RFs), simple and complex cells, spatially asymmetric excitation and inhibition). Reverse correlation analysis revealed that a simplified network based on first- and second-order space-time correlations of the recurrent model behaved much like a feedforward motion energy (ME) model. The feedforward model, however, failed to capture the full speed tuning and direction selectivity properties, based on higher-than-second-order space-time correlations, that are typically found in MT. These findings support the idea that recurrent network connectivity can create the temporal delays needed to compute velocity. Moreover, the model explains why the motion detection system often behaves like a feedforward ME network, even though the anatomical evidence strongly suggests that this network should be dominated by recurrent feedback.
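For readers unfamiliar with the feedforward comparison model, the motion-energy computation can be illustrated with a minimal 1-D Adelson–Bergen-style detector: quadrature pairs of oriented space-time filters whose squared outputs are summed, with opponent energy taken between the two directions. The filter parameters below are illustrative choices, not values fitted to MT data.

```python
import numpy as np

# Minimal 1-D motion-energy detector (quadrature-pair sketch).
x = np.linspace(-2, 2, 64)          # space (deg)
t = np.linspace(0, 1, 64)           # time (s)
X, T = np.meshgrid(x, t)
fs, ft = 1.0, 4.0                   # spatial (c/deg) and temporal (Hz) frequency

def st_filter(phase, direction):
    """Oriented space-time Gabor; direction = +1 (rightward) or -1 (leftward)."""
    envelope = np.exp(-(X**2) / 0.5 - ((T - 0.5)**2) / 0.1)
    return envelope * np.cos(2 * np.pi * (fs * X - direction * ft * T) + phase)

def motion_energy(stimulus, direction):
    """Energy = sum of squared responses of a quadrature filter pair."""
    even = np.sum(st_filter(0.0, direction) * stimulus)
    odd = np.sum(st_filter(np.pi / 2, direction) * stimulus)
    return even**2 + odd**2

# A rightward-drifting grating drives the rightward pair far more strongly.
stim = np.cos(2 * np.pi * (fs * X - ft * T))
opponent = motion_energy(stim, +1) - motion_energy(stim, -1)
```

Because such a detector is built from at most second-order (quadratic) operations on the stimulus, it cannot reproduce tuning properties that depend on higher-order space-time correlations, which is where the recurrent model departs from it.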
motion; model; recurrent connections; time; middle temporal area
The goal of multi-voxel pattern analysis (MVPA) in BOLD imaging is to determine whether patterns of activation across multiple voxels change with experimental conditions. MVPA is a powerful technique and its use is growing rapidly, but it poses serious statistical challenges. For instance, it is well known that the slow nature of the BOLD response can lead to greatly exaggerated performance estimates. Methods are available to avoid this overestimation, and we present those here in tutorial fashion. We go on to show that, even with these methods, standard tests of significance such as Student's t test and the binomial test are invalid in typical MRI experiments. Only a carefully constructed permutation test correctly assesses statistical significance. Furthermore, our simulations show that performance estimates increase with both temporal and spatial signal correlations among multiple voxels. This dependence implies that a comparison of MVPA performance between areas, between subjects, or even between BOLD signals that have been preprocessed in different ways requires great care.
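The permutation-test point can be made concrete with a sketch in which condition labels are reshuffled only as whole-run blocks, so the slow temporal correlations of the BOLD signal stay intact within each run. The toy data and the run-block shuffling scheme are illustrative assumptions, not the paper's exact procedure.

```python
import random

random.seed(1)

def accuracy(labels, preds):
    """Fraction of trials where the decoder's prediction matches the label."""
    return sum(a == b for a, b in zip(labels, preds)) / len(labels)

def run_permutation_p(labels_by_run, preds_by_run, n_perm=10_000):
    """Permutation p-value for decoding accuracy.

    Labels are reshuffled only as intact per-run blocks, preserving the
    temporal correlation structure within each run; this sketch assumes
    all runs have the same number of trials.
    """
    flat_labels = [l for run in labels_by_run for l in run]
    flat_preds = [p for run in preds_by_run for p in run]
    observed = accuracy(flat_labels, flat_preds)
    hits = 0
    for _ in range(n_perm):
        shuffled = random.sample(labels_by_run, len(labels_by_run))
        flat_shuffled = [l for run in shuffled for l in run]
        hits += accuracy(flat_shuffled, flat_preds) >= observed
    return (hits + 1) / (n_perm + 1)    # add-one correction avoids p = 0

# Toy data: three runs of four trials each (true labels and predictions).
labels = [["A", "B", "A", "B"], ["B", "A", "B", "A"], ["A", "A", "B", "B"]]
preds = [["A", "B", "A", "B"], ["B", "A", "B", "A"], ["A", "B", "B", "B"]]
p = run_permutation_p(labels, preds)
```

With only three runs there are just 3! = 6 distinct block assignments, so the p-value cannot be small no matter how high the observed accuracy; this is exactly why the number of runs, not trials, limits the sensitivity of such a test.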
Human vision requires fast eye movements (saccades). Each saccade causes a self-induced motion signal, but we are not aware of this potentially jarring visual input. Among the theorized causes of this phenomenon is a decrease in visual sensitivity before (presaccadic suppression) and during (intrasaccadic suppression) saccades. We investigated intrasaccadic suppression using a perceptual template model (PTM) relating visual detection to different signal-processing stages. One stage changes the gain on the detector's input; another increases uncertainty about the stimulus, allowing more noise into the detector; and other stages inject noise into the detector in a stimulus-dependent or -independent manner. By quantifying intrasaccadic suppression of flashed horizontal gratings at varying external noise levels, we obtained threshold-versus-noise (TVN) data, allowing us to fit the PTM. We tested whether any of the PTM parameters changed significantly between the fixation and saccade conditions and could therefore account for intrasaccadic suppression. We found that the dominant contribution to intrasaccadic suppression was a reduction in the gain of the visual detector. We discuss how our study differs from previous ones that have pointed to uncertainty as an underlying cause of intrasaccadic suppression and how the equivalent noise approach provides a framework for comparing the disparate neural correlates of saccadic suppression.
saccadic suppression; perceptual template model; equivalent noise; eye movements; noise injection; gain reduction; spatial uncertainty
Many visual areas of the primate brain contain signals related to the current position of the eyes in the orbit. These cortical eye-position signals are thought to underlie the transformation of retinal input – which changes with every eye movement – into a stable representation of visual space. For this coding scheme to work, such signals would need to be updated fast enough to keep up with the eye during normal exploratory behavior. We examined the dynamics of cortical eye-position signals in four dorsal visual areas of the macaque brain: the lateral and ventral intraparietal areas (LIP; VIP), the middle temporal area (MT), and the medial superior temporal area (MST). We recorded extracellular activity of single neurons while the animal performed sequences of fixations and saccades in darkness.
The data show that eye-position signals are updated predictively, such that the representation shifts in the direction of a saccade shortly (<100 ms) before the actual eye movement. Despite this early start, eye-position signals remain inaccurate until shortly (10–150 ms) after the eye movement. Using simulated behavioral experiments, we show that this brief misrepresentation of eye position provides a neural explanation for the psychophysical phenomenon of ‘perisaccadic mislocalization’, in which observers misperceive the positions of visual targets flashed around the time of saccadic eye movements.
Together, these results suggest that eye-position signals in the dorsal visual system are updated rapidly across eye movements and play a direct role in perceptual localization, even when they are erroneous.
Neurons in the middle temporal area (MT) are often viewed as motion detectors that prefer a single direction of motion in a single region of space. This assumption plays an important role in our understanding of visual processing, and models of motion processing in particular. We used extracellular recordings in area MT of awake, behaving monkeys (M. mulatta) to test this assumption with a novel reverse correlation approach. Nearly half of the MT neurons in our sample deviated significantly from the classical view. First, in many cells, direction preference changed with the location of the stimulus within the receptive field. Second, the spatial response profile often had multiple peaks with apparent gaps in between. This shows that visual motion analysis in MT has access to motion detectors that are more complex than commonly thought. This complexity could be a mere byproduct of imperfect development, but it can also be understood as the natural consequence of the non-linear, recurrent interactions among laterally connected MT neurons. An important direction for future research is to investigate whether these inhomogeneities are advantageous, how they can be incorporated into models of motion detection, and whether they can provide quantitative insight into the underlying effective connectivity.
receptive field; motion; middle temporal area; direction tuning; visual perception
We use saccades several times per second to move the fovea between points of interest and build an understanding of our visual environment. Recent behavioral experiments show evidence for the integration of pre- and postsaccadic information (even subliminally), the modulation of visual sensitivity, and the rapid reallocation of attention. The recent physiological literature has identified a characteristic modulation of neural responsiveness (a perisaccadic reduction followed by a postsaccadic increase) that is found in many visual areas but whose source is as yet unknown. This modulation seems optimal for reducing sensitivity during saccades and boosting sensitivity between them, but no study has yet established a direct causal link between neural and behavioral changes.
The ability to localize visual objects is a fundamental component of human behavior and requires the integration of position information from object components. The retinal eccentricity of a stimulus and the locus of spatial attention can affect object localization, but it is unclear whether these factors alter the global localization of the object, the localization of object components, or both. We used psychophysical methods in humans to quantify behavioral responses in a centroid estimation task. Subjects located the centroid of briefly presented random dot patterns (RDPs). A peripheral cue was used to bias attention towards one side of the display. We found that although subjects were able to localize centroid positions reliably, they typically had a bias towards the fovea and a shift towards the locus of attention. We compared quantitative models that explain these effects either as biased global localization of the RDPs or as anisotropic integration of weighted dot component positions. A model that allowed retinal eccentricity and spatial attention to alter the weights assigned to individual dot positions best explained subjects’ performance. These results show that global position perception depends on both the retinal eccentricity of stimulus components and their positions relative to the current locus of attention.
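The winning model class, in which the weights on individual dot positions are modulated by retinal eccentricity and proximity to the attended location, can be sketched as follows. The weighting functions and all constants here are hypothetical stand-ins, not the paper's fitted parameters.

```python
import math

def perceived_centroid(dots, fixation=0.0, attended=None,
                       ecc_scale=20.0, attn_scale=5.0, attn_gain=1.0):
    """Weighted centroid of 1-D dot positions (deg).

    Weights fall off with distance from fixation (eccentricity) and, when
    an attended location is given, are boosted near it. The exponential
    forms and scale constants are illustrative assumptions.
    """
    weights = []
    for d in dots:
        w = math.exp(-abs(d - fixation) / ecc_scale)         # eccentricity falloff
        if attended is not None:
            w *= 1.0 + attn_gain * math.exp(-abs(d - attended) / attn_scale)
        weights.append(w)
    total = sum(weights)
    return sum(w * d for w, d in zip(weights, dots)) / total

dots = [6.0, 8.0, 10.0, 12.0, 14.0]        # dot pattern centered at 10 deg
plain = perceived_centroid(dots)           # foveal bias pulls this below 10
cued = perceived_centroid(dots, attended=14.0)   # pulled toward the cue
```

Even though the weighting acts only on individual dot components, the net effect is a global shift of the perceived centroid toward the fovea and toward the attended location, which is the signature the model comparison exploited.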
Observer translation results in optic flow that specifies heading. Concurrent smooth pursuit causes distortion of the retinal flow pattern for which the visual system compensates. The distortion and its perceptual compensation are usually modeled in terms of instantaneous velocities. However, apart from adding a velocity to the flow field, pursuit also incrementally changes the direction of gaze. The effect of gaze displacement on optic flow perception has received little attention. Here we separated the effects of velocity and gaze displacement by measuring the perceived two-dimensional focus position of rotating flow patterns during pursuit. Such stimuli are useful in the current context because the two effects work in orthogonal directions. As expected, the instantaneous pursuit velocity shifted the perceived focus orthogonally to the pursuit direction. Additionally, the focus was mislocalized in the direction of the pursuit. Experiments that manipulated the presentation duration, flow speed, and uncertainty of the focus location supported the idea that the latter component of mislocalization resulted from temporal integration of the retinal trajectory of the focus. Finally, a comparison of the shift magnitudes obtained in conditions with and without pursuit (but with similar retinal stimulation) suggested that the compensation for both effects uses extraretinal information.
Perceptual stability requires the integration of information across eye movements.
We first tested the hypothesis that motion signals are integrated by neurons whose receptive fields (RFs) do not move with the eye, but stay fixed in the world. Specifically, we measured the RF properties of neurons in the middle temporal area (MT) of macaques (Macaca mulatta) during the slow phase of optokinetic nystagmus. Using a novel method to estimate RF locations for both spikes and local field potentials, we found that the location on the retina that changed spike rates or local field potentials did not change with eye position; RFs moved with the eye.
Second, we tested the hypothesis that neurons link information across eye positions by remapping the retinal location of their RFs to future locations. To test this we compared RF locations during leftward and rightward slow phases of optokinetic nystagmus. We found no evidence for remapping during slow eye movements; the RF location was not affected by eye movement direction.
Taken together, our results show that RFs of MT neurons and the aggregate activity reflected in local field potentials are yoked to the eye during slow eye movements. This implies that individual MT neurons do not integrate sensory information from a single position in the world across eye movements. Future research will have to determine whether such integration, and the construction of perceptual stability, takes place in the form of a distributed population code in eye-centered visual cortex, or is deferred to downstream areas.
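The logic of estimating an RF location from spikes can be sketched with a simulated reverse-correlation experiment: flash probes at random positions and average the positions that triggered spikes (a spike-triggered average). The simulated neuron and all numbers below are hypothetical, and the actual study used a different, more elaborate estimation method.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated reverse-correlation mapping: a probe appears at a random
# 1-D position on each frame, and the neuron fires with a probability
# that peaks at its (hypothetical) RF center.
n_frames = 20000
positions = rng.uniform(-10, 10, n_frames)   # probe position per frame (deg)
rf_center, rf_width = 3.0, 1.5               # invented ground truth

rate = 0.02 + 0.5 * np.exp(-((positions - rf_center)**2) / (2 * rf_width**2))
spikes = rng.random(n_frames) < rate         # Bernoulli spiking per frame

# Spike-triggered average of probe position: an estimate of the RF center,
# pulled slightly toward the screen center by the baseline firing.
sta = positions[spikes].mean()
```

Repeating such a mapping at different eye positions (or during leftward versus rightward slow phases) is the basic test of whether the recovered RF center stays fixed on the retina or in the world.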
Eye movements; OKN; electrophysiology; receptive field mapping; reference frames; LFPs
Introspection makes it clear that we do not see the visual motion generated by the approximately three saccadic eye movements we make each second. We refer to the lack of awareness of the motion across the retina that is generated by a saccade as saccadic omission: the visual stimulus generated by the saccade is omitted from our subjective awareness. In the laboratory, saccadic omission is often studied by investigating saccadic suppression: the reduction in visual sensitivity prior to and during a saccade (see Ross et al. and Wurtz for reviews). We investigated whether perceptual stability requires that a mechanism like saccadic suppression removes peri-saccadic stimuli from visual processing to prevent their presumed harmful effect on perceptual stability [4,5]. Our results show that a stimulus that undergoes saccadic suppression and is unseen can nevertheless generate a shape contrast illusion. The shape contrast illusion can be generated when the inducer and test stimulus are separated in space and is therefore thought to arise at a later stage of visual processing. This shows that perceptual stability is attained without removing stimuli from processing and suggests a conceptually new view of perceptual stability in which peri-saccadic stimuli are processed by the early visual system, but these signals are prevented from reaching awareness at a later stage of processing.
Visual stimuli presented just before or during an eye movement are more difficult to detect than those same visual stimuli presented during fixation. This laboratory phenomenon – behavioral saccadic suppression – is thought to underlie the everyday experience of not perceiving the motion created by our own eye movements – saccadic omission. At the neural level, many cortical and subcortical areas respond differently to perisaccadic visual stimuli than to stimuli presented during fixation. Those neural response changes, however, are complex and the link to the behavioral phenomena of reduced detectability remains tentative. We employed a well-established model of human visual detection performance to provide a quantitative description of behavioral saccadic suppression and thereby allow a more focused search for its neural mechanisms.
We used an equivalent noise method to distinguish between three mechanisms that could underlie saccadic suppression. The first hypothesized mechanism reduces the gain of the visual system, the second increases internal noise levels in a stimulus-dependent manner, and the third increases stimulus uncertainty. All three mechanisms predict that perisaccadic stimuli should be more difficult to detect, but each mechanism predicts a unique pattern of detectability as a function of the amount of external noise.
Our experimental finding was that saccades increased detection thresholds at low external noise, but had little influence on thresholds at high levels of external noise. A formal analysis of these data in the equivalent noise analysis framework showed that the most parsimonious mechanism underlying saccadic suppression is a stimulus-independent reduction in response gain.
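The gain account can be made explicit with a toy detector that applies a gain g to its input (signal plus external noise) before a fixed additive internal noise. Thresholds then follow threshold(N_ext) = d' · sqrt(N_ext² + (σ/g)²), so lowering g elevates thresholds at low external noise but barely changes them once external noise dominates, which is the TvN signature described above. All parameter values below are illustrative.

```python
import math

def threshold(n_ext, gain, sigma=0.1, dprime=1.0):
    """Detection threshold of a toy detector with input gain `gain`,
    fixed additive internal noise `sigma`, and criterion d' = `dprime`.
    Derived from d' = gain*c / sqrt(gain**2 * n_ext**2 + sigma**2)."""
    return dprime * math.sqrt(n_ext**2 + (sigma / gain)**2)

noise_levels = [0.0, 0.05, 0.1, 0.2, 0.4, 0.8]
fixation = [threshold(n, gain=1.0) for n in noise_levels]
saccade = [threshold(n, gain=0.5) for n in noise_levels]  # reduced input gain

# Suppression ratio shrinks toward 1 as external noise grows.
ratios = [s / f for s, f in zip(saccade, fixation)]
```

A stimulus-dependent internal-noise increase, by contrast, would elevate thresholds at high external noise as well, which is how the equivalent noise framework separates the candidate mechanisms.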
Vision; Psychophysics; Equivalent noise analysis; Saccadic suppression
Human vision remains perceptually stable even though retinal inputs change rapidly with each eye movement. Although the neural basis of visual stability remains unknown, a recent psychophysical study pointed to the existence of visual feature representations anchored in environmental rather than retinal coordinates (e.g. ‘spatiotopic’ receptive fields; Melcher, D., and Morrone, M.C. (2003). Spatiotopic temporal integration of visual motion across saccadic eye movements. Nat Neurosci 6, 877-881). In that study, sensitivity to a moving stimulus presented after a saccadic eye movement was enhanced when preceded by another moving stimulus at the same spatial location prior to the saccade. The finding is consistent with spatiotopic sensory integration, but it could also have arisen from a probabilistic improvement in performance due to the presence of more than one motion signal for the perceptual decision. Here we show that this statistical advantage accounts completely for summation effects in this task. We first demonstrate that measurements of summation are confounded by noise related to an observer's uncertainty about motion onset times. When this uncertainty is minimized, comparable summation is observed irrespective of whether two motion signals occupy the same or different locations in space, and whether they contain the same or opposite directions of motion. These results are incompatible with the tuning properties of motion-sensitive sensory neurons and provide no evidence for a spatiotopic representation of visual motion. Instead, summation in this context reflects a decision mechanism that uses abstract representations of sensory events to optimize choice behavior.
eye movements; vision; motion; spatial coding; saccade; psychophysics
We make fast, ballistic eye movements called saccades more often than our heart beats. Even though every saccade causes a large movement of the image of the environment on our retina, we never perceive this motion. This aspect of perceptual stability is often referred to as saccadic suppression: a reduction of visual sensitivity around the time of saccades. Here, we investigated the neural basis of this perceptual phenomenon with extracellular recordings from awake, behaving monkeys in the middle temporal (MT), medial superior temporal (MST), ventral intraparietal (VIP), and lateral intraparietal (LIP) areas. We found that in each of these areas the neural response to a visual stimulus changes around an eye movement. The perisaccadic response changes are qualitatively different in each of these areas, suggesting that they do not arise from a change in a common input area. Importantly, our data show that the suppression in the dorsal stream starts well before the eye movement. This clearly shows that the suppression is not just a consequence of the changes in visual input during the eye movement, but rather must involve a process that actively modulates neural activity just before a saccade.
saccade; macaque; extrastriate cortex; eye movement; vision; dorsal stream
The stability of visual perception is partly maintained by saccadic suppression: the selective reduction of visual sensitivity that accompanies rapid eye movements. The neural mechanisms responsible for this reduced perisaccadic visibility remain unknown, but the Lateral Geniculate Nucleus (LGN) has been proposed as a likely site. Our data show, however, that the saccadic suppression of a target flashed in the right visual hemifield increased with an increase in background luminance in the left visual hemifield. Because each LGN only receives retinal input from a single hemifield, this hemifield interaction cannot be explained solely on the basis of neural mechanisms operating in the LGN. Instead, this suggests that saccadic suppression must involve processing in higher level cortical areas that have access to a considerable part of the ipsilateral hemifield.
The mechanisms underlying visual perceptual stability are usually investigated using voluntary eye movements. In such studies, errors in perceptual stability during saccades and pursuit are commonly interpreted as mismatches between actual eye-position and eye-position signals in the brain. The generality of this interpretation could in principle be tested by investigating spatial localization during reflexive eye movements whose kinematics are very similar to those of voluntary eye movements.
Accordingly, in this study, we determined mislocalization of flashed visual targets during optokinetic afternystagmus (OKAN). These eye movements are quite unique in that they occur in complete darkness and are generated by subcortical control mechanisms. We found that during horizontal OKAN slow phases, subjects mislocalize targets away from the fovea in the horizontal direction. This corresponds to a perceived expansion of visual space and is unlike the mislocalization found for any other voluntary or reflexive eye movement. Around OKAN fast phases, we found a bias in the direction of the fast phase prior to its onset and opposite to the fast-phase direction thereafter. Such a biphasic modulation has also been reported in the temporal vicinity of saccades, and during optokinetic nystagmus (OKN). A direct comparison, however, showed that the modulation during OKAN was much larger and occurred earlier relative to fast-phase onset than during OKN.
A simple mismatch between the current eye-position and the eye-position signal in the brain is unlikely to explain such disparate results across similar eye movements. Instead, these data support the view that mislocalization arises from errors in eye-centered position information.
localization; eye movement; OKAN; OKN; saccade; smooth pursuit