Eye-movement abnormalities in schizophrenia are a well-established phenomenon, observed in many studies. In such studies, visual targets are usually presented in the center of the visual field while the subject's head remains fixed. In everyday life, however, targets may also appear in the periphery. This study is among the first to investigate eye and head movements in schizophrenia by presenting targets in the periphery of the visual field.
Two visual recognition tasks, a color recognition task and a Landolt orientation task, were presented in the periphery (at a visual angle of 55° from the center of the field of view). Each subject viewed 96 trials, and all eye and head movements were recorded simultaneously using video-based oculography and magnetic motion tracking of the head. Data from 14 patients with schizophrenia and 14 controls were analyzed. The patients had similar saccadic latencies in both tasks, whereas controls had shorter saccadic latencies in the Landolt task. Patients also performed more head movements and showed larger eye-head offsets during combined eye-head shifts than controls.
Patients with schizophrenia may not be able to adapt to the two different tasks to the same extent as controls, as reflected in the absence of task-specific modulation of their saccadic latencies. This can be interpreted as a specific oculomotor attentional dysfunction and may support the hypothesis that patients with schizophrenia have difficulty determining the relevance of stimuli. Patients also showed an uneconomical overuse of head movements, possibly caused by alterations in frontal executive function that impair the inhibition of head shifts. In addition, a model expressing response time as a function of eye and head movement amplitude explained 93% of the variance, but only in controls, indicating abnormal eye-head coordination in patients with schizophrenia.
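The variance-explained figure reported above can be illustrated with a minimal sketch: an ordinary least-squares fit of response time on eye and head movement amplitude, with R² computed from the residuals. The trial values below are invented for illustration and are not the study's data.

```python
# Illustrative sketch (not the authors' model): ordinary least-squares fit of
# response time as a linear function of eye and head movement amplitude,
# reporting the proportion of variance explained (R^2). All data are made up.

def ols_fit(X, y):
    """Fit y = b0 + b1*x1 + b2*x2 by solving the normal equations."""
    Z = [[1.0] + list(row) for row in X]     # design matrix with intercept
    k = len(Z[0])
    # Normal equations: (Z'Z) beta = Z'y
    A = [[sum(Z[i][p] * Z[i][q] for i in range(len(Z))) for q in range(k)]
         for p in range(k)]
    b = [sum(Z[i][p] * y[i] for i in range(len(Z))) for p in range(k)]
    # Gaussian elimination with partial pivoting.
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    beta = [0.0] * k
    for r in range(k - 1, -1, -1):
        beta[r] = (b[r] - sum(A[r][c] * beta[c] for c in range(r + 1, k))) / A[r][r]
    return beta

def r_squared(X, y, beta):
    pred = [beta[0] + sum(bi * xi for bi, xi in zip(beta[1:], row)) for row in X]
    ybar = sum(y) / len(y)
    ss_res = sum((yi - pi) ** 2 for yi, pi in zip(y, pred))
    ss_tot = sum((yi - ybar) ** 2 for yi in y)
    return 1.0 - ss_res / ss_tot

# Hypothetical trials: (eye amplitude, head amplitude) in degrees, RT in ms.
X = [(30, 10), (35, 15), (40, 20), (45, 22), (50, 30), (55, 33), (42, 18), (38, 12)]
y = [420.0, 455.0, 500.0, 520.0, 585.0, 610.0, 505.0, 460.0]
beta = ols_fit(X, y)
print(f"R^2 = {r_squared(X, y, beta):.2f}")
```

In the study's data, such a model fit well only for controls, which is what motivates the interpretation of abnormal eye-head coordination in patients.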
Spatial inputs from the auditory periphery change with movements of the head or whole body relative to the sound source. Nevertheless, humans perceive a stable auditory environment and react appropriately to a sound source. This suggests that the inputs are reinterpreted in the brain while being integrated with information about the movements. Little is known, however, about how these movements modulate auditory perceptual processing. Here, we investigate the effect of linear acceleration on auditory space representation.
Participants were passively transported forward or backward at constant accelerations using a robotic wheelchair. An array of loudspeakers was aligned parallel to the motion direction along a wall to the right of the listener. A short noise burst was presented during the self-motion from one of the loudspeakers when the listener’s physical coronal plane reached the location of one of the speakers (null point). In Experiments 1 and 2, the participants indicated whether the sound was presented forward or backward relative to their subjective coronal plane. The results showed that the sound position aligned with the subjective coronal plane was displaced ahead of the null point only during forward self-motion, and that the magnitude of the displacement increased with acceleration. Experiment 3 investigated the structure of the auditory space in the traveling direction during forward self-motion. The sounds were presented at various distances from the null point. The participants indicated the perceived sound location by pointing with a rod. All the sounds that were actually located in the traveling direction were perceived as being biased towards the null point.
These results suggest a distortion of auditory space in the direction of movement during forward self-motion. The underlying mechanism might involve anticipatory spatial shifts in auditory receptive field locations driven by afferent signals from the vestibular system.
Multiple dots moving independently back and forth on a flat screen induce a compelling illusion of a sphere rotating in depth (structure-from-motion). If all dots simultaneously reverse their direction of motion, two perceptual outcomes are possible: either the illusory rotation reverses as well (and the illusory depth of each dot is maintained), or the illusory rotation is maintained (but the illusory depth of each dot reverses). We investigated the role of attention in these ambiguous reversals. Greater availability of attention – as manipulated with a concurrent task or inferred from eye movement statistics – shifted the balance in favor of reversing illusory rotation (rather than depth). On the other hand, volitional control over illusory reversals was limited and did not depend on tracking individual dots during the direction reversal. Finally, display properties strongly influenced ambiguous reversals. Any asymmetries between ‘front’ and ‘back’ surfaces – created either on purpose by coloring or accidentally by random dot placement – also shifted the balance in favor of reversing illusory rotation (rather than depth). We conclude that the outcome of ambiguous reversals depends on attention, specifically on attention to the illusory sphere and its surface irregularities, but not on attentive tracking of individual surface dots.
Current assessment of visual neglect involves paper-and-pencil tests or computer-based tasks. Both have been criticised because of their lack of ecological validity as target stimuli can only be presented in a restricted visual range. This study examined the user-friendliness and diagnostic strength of a new “Circle-Monitor” (CM), which enlarges the range of the peripersonal space, in comparison to a standard paper-and-pencil test (Neglect-Test, NET).
Ten stroke patients with neglect and ten age-matched healthy controls were examined with the NET and with the CM test, the latter comprising four subtests (Star Cancellation, Line Bisection, Dice Task, and Puzzle Test).
The acceptance of the CM in elderly controls and neglect patients was high. Participants rated the examination by CM as clear, safe and more enjoyable than the NET. Healthy controls performed at ceiling on all subtests, without any systematic differences between the visual fields. Both NET and CM revealed significant differences between controls and patients in Line Bisection, Star Cancellation and visuo-constructive tasks (NET: Figure Copying, CM: Puzzle Test). Discriminant analyses revealed that cross-validated assignment of patients and controls to groups was more precise when based on the CM (hit rate 90%) than on the NET (hit rate 70%).
The CM proved to be a sensitive novel tool for diagnosing visual neglect symptoms quickly and accurately, with superior diagnostic validity compared to a standard neglect test, while being well accepted by patients. Due to its upgradable functions, the system may be valuable not only for testing non-visual neglect symptoms, but also for providing treatment and assessing its outcome.
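As an illustration of the cross-validated hit rates reported above, the sketch below computes a leave-one-out hit rate for assigning subjects to groups from a single score. A simple nearest-class-mean rule stands in for the discriminant analysis actually used, and the bisection deviations are invented for the example.

```python
# Illustrative sketch of a cross-validated "hit rate": leave-one-out
# classification of patients vs. controls from one test score. A
# nearest-class-mean rule stands in for the study's discriminant analysis;
# the scores below are invented.

def loo_hit_rate(scores, labels):
    """Leave-one-out accuracy of a nearest-class-mean classifier."""
    hits = 0
    for i in range(len(scores)):
        # Hold out subject i, compute class means on the rest.
        train = [(s, l) for j, (s, l) in enumerate(zip(scores, labels)) if j != i]
        means = {}
        for lab in set(l for _, l in train):
            vals = [s for s, l in train if l == lab]
            means[lab] = sum(vals) / len(vals)
        # Assign the held-out subject to the nearest class mean.
        pred = min(means, key=lambda lab: abs(scores[i] - means[lab]))
        hits += pred == labels[i]
    return hits / len(scores)

# Hypothetical line-bisection deviations (mm): patients deviate rightward.
scores = [1, 2, 0, 3, 2, 1, 2, 0, 1, 2,          # controls
          12, 15, 9, 20, 14, 11, 18, 13, 4, 16]  # patients
labels = ["control"] * 10 + ["patient"] * 10
print(f"hit rate: {loo_hit_rate(scores, labels):.0%}")
```

Holding out each subject before classifying them, as above, is what makes the reported 90% and 70% hit rates "cross-validated" rather than optimistic resubstitution estimates.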
Here we explore the possibility that a core function of sensory cortex is the generation of an internal simulation of the sensory environment in real time. A logical elaboration of this idea leads to a dynamical neural architecture that oscillates between two fundamental network states, one driven by external input, and the other by recurrent synaptic drive in the absence of sensory input. Synaptic strength is modified by a proposed synaptic state matching (SSM) process that ensures equivalence of spike statistics between the two network states. Remarkably, SSM, operating locally at individual synapses, generates accurate and stable network-level predictive internal representations, enabling pattern completion and unsupervised feature detection from noisy sensory input. SSM is a biologically plausible substrate for learning and memory because it brings together sequence learning, feature detection, synaptic homeostasis, and network oscillations under a single unifying computational framework.
Observers often fail to notice even dramatic changes to their environment, a phenomenon known as change blindness. If training could enhance change detection performance in general, then it might help to remedy some real-world consequences of change blindness (e.g. failing to detect hazards while driving). We examined whether adaptive training on a simple change detection task could improve the ability to detect changes in untrained tasks for young and older adults. Consistent with an effective training procedure, both young and older adults were better able to detect changes to trained objects following training. However, neither group showed differential improvement on untrained change detection tasks when compared to active control groups. Change detection training led to improvements on the trained task but did not generalize to other change detection tasks.
The ability to devote attention simultaneously to multiple visual objects plays an important role in domains ranging from everyday activities to the workplace. Yet, no studies have systematically explored the fixation strategies that optimize attention to two spatially distinct objects. Assuming the two objects require attention nearly simultaneously, subjects could either fixate one object or fixate between the objects. Studies measuring the breadth of attention have focused almost exclusively on the former strategy, by having subjects simultaneously perform one attention-demanding task at fixation and another in the periphery. We compared performance when one object was at fixation and the other was in the periphery to a condition in which both objects were in the periphery and subjects fixated between them. Performance was better with two peripheral stimuli than with one central and one peripheral stimulus, meaning that a strategy of fixating between stimuli permitted greater attention breadth. Consistent with the idea that both measures tap attention breadth, sport experts consistently outperformed novices with both fixation strategies. Our findings suggest a way to improve performance when observers must pay attention to multiple objects across spatial regions. We discuss possible explanations for this performance advantage.
Detection of visual contours (strings of small oriented elements) is markedly poor in schizophrenia. This has previously been attributed to an inability to group local information across space into a global percept. Here, we show that this failure actually originates from a combination of poor encoding of local orientation and abnormal processing of visual context.
We measured the ability of observers with schizophrenia to localise contours embedded in backgrounds of differently oriented elements (either randomly oriented, near-parallel or near-perpendicular to the contour). In addition, we measured patients’ ability to process local orientation information (i.e., report the orientation of an individual element) for both isolated and crowded elements (i.e., presented with nearby distractors).
While patients are poor at detecting contours amongst randomly oriented elements, they are proportionally less disrupted (compared to unaffected controls) when contour and surrounding elements have similar orientations (near-parallel condition). In addition, patients are poor at reporting the orientation of an individual element but, again, are less prone to interference from nearby distractors, a phenomenon known as visual crowding.
We suggest that patients’ poor performance at contour perception arises not as a consequence of an “integration deficit” but from a combination of reduced sensitivity to local orientation and abnormalities in contextual processing. We propose that this is a consequence of abnormal gain control, a phenomenon that has been implicated in orientation-selectivity as well as surround suppression.
Eye exercises have been prescribed to resolve a multitude of eye-related problems. However, studies on the efficacy of eye exercises are lacking, mainly due to the absence of simple assessment tools in the clinic. Because similar regions of the brain are responsible for eye movements and visual attention, we used a modified rapid serial visual presentation (RSVP) task to assess any measurable effect of short-term eye exercise on these domains. In the present study, twenty subjects were equally divided into control and experimental groups, each of which performed a pre-training RSVP assessment in which target letters, to which subjects were asked to respond by pressing a spacebar, were presented serially and rapidly. Response time to target letters, accuracy in responding to target letters, and correct identification of target letters were measured in each of 12 sessions. The experimental group then performed active eye exercises, while the control group performed a task that minimized eye movements, for 18.5 minutes. A final post-training RSVP assessment was performed by both groups, and response time, accuracy, and letter identification were compared between and within subject groups pre- and post-training. Subjects who performed eye exercises were more accurate in responding to target letters separated by one distractor and in letter identification in the post-training RSVP assessment, while response latency was unchanged between and within groups. This suggests that eye exercises may enhance cognitive performance on tasks related to attention and memory over a very brief course of training, and that RSVP may be a useful measure of this efficacy. Further research is needed to determine whether eye exercises are an effective treatment for patients with cognitive and eye-related disorders.
Initiating an eye movement towards a suddenly appearing visual target is faster when an accessory auditory stimulus occurs in close spatiotemporal vicinity. Such facilitation of saccadic reaction time (SRT) is well-documented, but the exact neural mechanisms underlying the crossmodal effect remain to be elucidated. From EEG/MEG studies it has been hypothesized that coupled oscillatory activity in primary sensory cortices regulates multisensory processing. Specifically, it is assumed that the phase of an ongoing neural oscillation is shifted due to the occurrence of a sensory stimulus so that, across trials, phase values become highly consistent (phase reset). If one can identify the phase an oscillation is reset to, it is possible to predict when temporal windows of high and low excitability will occur. However, in behavioral experiments the pre-stimulus phase will be different on successive repetitions of the experimental trial, and average performance over many trials will show no signs of the modulation. Here we circumvent this problem by repeatedly presenting an auditory accessory stimulus followed by a visual target stimulus with a temporal delay varied in steps of 2 ms. Performing a discrete time series analysis on SRT as a function of the delay, we provide statistical evidence for the existence of distinct peak spectral components in the power spectrum. These frequencies, although varying across participants, fall within the beta and gamma range (20 to 40 Hz) of neural oscillatory activity observed in neurophysiological studies of multisensory integration. Some evidence for high-theta/alpha activity was found as well. Our results are consistent with the phase reset hypothesis and demonstrate that it is amenable to testing by purely psychophysical methods. Thus, any theory of multisensory processes that connects specific brain states with patterns of saccadic responses should be able to account for traces of oscillatory activity in observable behavior.
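The delay-series analysis described above can be sketched in a few lines: SRT measured at delays spaced 2 ms apart is treated as a discrete time series, and its periodogram is inspected for peak spectral components. The 30 Hz modulation injected below is synthetic, chosen only to show how a peak in the beta/gamma range would surface from such data.

```python
# Sketch of the kind of spectral analysis described above: saccadic reaction
# time (SRT) sampled as a function of audio-visual delay in 2 ms steps is
# treated as a discrete time series, and a periodogram reveals oscillatory
# modulation. The 30 Hz component injected here is synthetic.

import math

def periodogram(x):
    """Power at each DFT frequency bin of a mean-removed real series."""
    n = len(x)
    xm = [v - sum(x) / n for v in x]
    power = []
    for k in range(n // 2 + 1):
        re = sum(xm[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = sum(xm[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        power.append(re * re + im * im)
    return power

dt = 0.002                               # 2 ms delay step -> 500 Hz "sampling rate"
delays = [i * dt for i in range(100)]    # delays 0..198 ms
# Mean SRT of ~200 ms with a 30 Hz oscillatory modulation of 5 ms amplitude.
srt = [200 + 5 * math.sin(2 * math.pi * 30 * d) for d in delays]

power = periodogram(srt)
peak_bin = max(range(1, len(power)), key=power.__getitem__)
peak_hz = peak_bin / (len(srt) * dt)
print(f"peak modulation at {peak_hz:.0f} Hz")
```

With 100 delays at 2 ms steps, the frequency resolution is 5 Hz and frequencies up to 250 Hz are in principle resolvable, comfortably covering the 20 to 40 Hz range reported above.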
The ability to understand and predict others’ behavior is essential for successful interactions. When making predictions about what other humans will do, we treat them as intentional systems and adopt the intentional stance, i.e., we refer to their mental states such as desires and intentions. In the present experiments, we investigated whether the mere belief that the observed agent is an intentional system influences basic social attention mechanisms. We presented pictures of a human and a robot face in a gaze cuing paradigm and manipulated the likelihood of adopting the intentional stance by instruction: in some conditions, participants were told that they were observing a human or a robot; in others, that they were observing a human-like mannequin or a robot whose eyes were controlled by a human. In conditions in which participants were made to believe they were observing human behavior (intentional stance likely), gaze cuing effects were significantly larger than in conditions in which adopting the intentional stance was less likely. This effect was independent of whether a human or a robot face was presented. We therefore conclude that adopting the intentional stance when observing others’ behavior fundamentally influences basic mechanisms of social attention. The present results provide striking evidence that high-level cognitive processes, such as beliefs, modulate bottom-up mechanisms of attentional selection in a top-down manner.
Attentional mechanisms are a crucial prerequisite for organizing behavior. Most situations can be characterized by a ‘competition’ between salient but irrelevant stimuli and less salient, relevant stimuli. In such situations, top-down and bottom-up mechanisms interact. In the present fMRI study, we examined how interindividual differences in resolving perceptual conflict are reflected in the brain networks mediating attentional selection. To this end, we employed a change detection task in which subjects had to detect luminance changes in the presence and absence of competing distractors. Good performers showed increased activation in the orbitofrontal cortex (BA 11), anterior cingulate (BA 25), inferior parietal lobule (BA 40), and visual areas V2 and V3, but decreased activation in BA 39, suggesting that areas mediating top-down attentional control are more strongly activated in this group. Increased activity in visual areas reflects neuronal enhancement related to selective attentional mechanisms recruited to resolve the perceptual conflict. In contrast, poor performers activated the left inferior parietal lobule (BA 39), while fronto-parietal and visual regions were continuously deactivated, suggesting that poor performers experience stronger conflict than good performers. Moreover, the suppression of neural activation in visual areas might indicate a strategy by poor performers of inhibiting the processing of the irrelevant non-target feature. These results indicate that high sensitivity in perceptual areas and increased attentional control lead to less conflict in stimulus processing and consequently to better performance in competitive attentional selection.
In studies of change blindness, observers often have the phenomenological impression that the blindness is overcome all at once, so that change detection, localization and identification apparently occur together. Three experiments are described that explore dissociations between these processes using a discrete trial procedure in which two visual frames are presented sequentially with no intervening inter-frame interval. The results reveal that change detection and localization are essentially perfect under these conditions regardless of the number of elements in the display, which is consistent with the idea that change detection and localization are mediated by pre-attentive parallel processes.
In contrast, identification accuracy for an item before it changes is generally poor, and is heavily dependent on the number of items displayed. Identification accuracy after a change is substantially better, but depends on the new item's duration. This suggests that the change captures attention, which substantially enhances the likelihood of correctly identifying the new item. However, the results also reveal a limited capacity to identify unattended items. Specifically, we provide evidence that strongly suggests that, at least under these conditions, observers were able to identify two items without focused attention. Our results further suggest that spatial pre-cues that attract attention to an item before the change occurs simply ensure that the cued item is one of the two whose identity is encoded.
Spatial relations are commonly divided into two global classes. Categorical relations are abstract relations which define areas of spatial equivalence, whereas coordinate relations are metric and concern exact distances. Categorical and coordinate relation processing are thought to rely on at least partially separate neurocognitive mechanisms, as reflected by differential lateralization patterns, in particular in the parietal cortex. In this study we address this textbook principle from a new angle. We studied retinotopic activation in early visual cortex, as a reflection of attentional distribution, in a spatial working memory task with either a categorical or a coordinate instruction. Participants were asked to memorize a dot position, with regard to a central cross, and to indicate whether a subsequent dot position matched the first dot position, either categorically (opposite quadrant of the cross) or coordinately (same distance to the centre of the cross). BOLD responses across the retinotopic maps of V1, V2, and V3 indicate that the spatial distribution of cortical activity was different for categorical and coordinate instructions throughout the retention interval; a more local focus was found during categorical processing, whereas the focus was more global for coordinate processing. This effect was strongest in V3, approached significance in V2, and was absent in V1. Furthermore, the two instructions led to different levels of activation in V3 during stimulus encoding; a stronger increase in activity was found for categorical processing. Together, these findings provide the first demonstration that instructions for specific types of spatial relations yield distinct attentional patterns that are already reflected in activity early in the visual cortex.
A combination of oculometric measurements, invasive electrophysiological recordings, and microstimulation has proven instrumental in studying the role of the Frontal Eye Field (FEF) in saccadic activity. We here gauged the ability of a non-invasive neurostimulation technology, Transcranial Magnetic Stimulation (TMS), to causally interfere with frontal activity in two rhesus macaques trained to perform an antisaccade task. We show that online single-pulse TMS significantly modulated antisaccade latencies. These effects proved dependent on TMS site (effects on the FEF but not on an actively stimulated control site), TMS modality (present under active but not sham TMS over the FEF area), TMS intensity (intensities of at least 40% of the stimulator's maximal output were required), TMS timing (more robust for pulses delivered at 150 ms than at 100 ms post target onset), and visual hemifield (relative latency decreases mainly for ipsilateral antisaccades). Our results demonstrate the feasibility of using TMS to causally modulate antisaccade-associated computations in the non-human primate brain and support the use of this approach in monkeys to study brain function and its non-invasive neuromodulation for exploratory and therapeutic purposes.
The ability to spot and evade people fighting in a crowd is potentially life-saving. To investigate how the visual system processes threatening actions, we employed a visual search paradigm with threatening boxer targets among emotionally neutral walker distractors, and vice versa. We found that a boxer popped out for both intact and scrambled actions, whereas walkers did not. A reverse correlation analysis revealed that observers' responses clustered around the time of the “punch”, a signature movement of boxing actions, but not around specific movements of the walker. These findings support the existence of a detector for signature movements in action perception. Such a detector helps in rapidly detecting aggressive behavior in a crowd, potentially through an expedited (sub)cortical threat-detection mechanism.
Exogenous attention can be understood as an adaptive tool that permits the detection and processing of biologically salient events even when the individual is engaged in a resource-consuming task. Indirect data suggest that the spatial frequency of stimulation may be a crucial element in this process. Behavioral and neural data (both functional and structural) were analyzed for 36 participants engaged in a digit categorization task in which distracters were presented. Distracters were biologically salient or anodyne images, and were presented in three spatial-frequency formats: intact, low spatial frequencies only, and high spatial frequencies only. Behavior confirmed enhanced exogenous attention to biologically salient distracters. Activity in the right and left intraparietal sulci and the right middle frontal gyrus was associated with this behavioral pattern and was greater in response to salient than to neutral distracters, with the three areas showing strong correlations with each other. Importantly, the enhanced response of this network to biologically salient distracters relative to neutral distracters relied on low spatial frequencies to a significantly greater extent than on high spatial frequencies. Structural analyses suggested the involvement of the internal capsule, superior longitudinal fasciculus and corpus callosum in this network. The results confirm that exogenous attention is preferentially captured by biologically salient information, and suggest that the architecture and function underlying this process are biased towards low spatial frequencies.
Amblyopia is a developmental abnormality that results in deficits on a wide range of visual tasks, most notably a reduced ability to see fine details, a loss in contrast sensitivity (especially for small objects), and difficulty seeing objects in clutter (crowding). The primary goal of this study was to evaluate whether crowding can be ameliorated in adults with amblyopia through perceptual learning using a flanked letter identification task that was designed to reduce crowding, and if so, whether the improvements transfer to untrained visual functions: visual acuity, contrast sensitivity and the size of the visual span (the amount of information obtained in one fixation). To evaluate whether the improvements following this training task were specific to training with flankers, we also trained another group of adult observers with amblyopia using a single letter identification task that was designed to improve letter contrast sensitivity, not crowding. Following 10,000 trials of training, both groups of observers showed improvements on their respective training tasks. The improvements generalized to improved visual acuity, letter contrast sensitivity, size of the visual span, and reduced crowding. The magnitude of the improvement on each of these measurements was similar in the two training groups. Perceptual learning regimens aimed at reducing crowding or improving letter contrast sensitivity are thus both effective in improving visual acuity and contrast sensitivity for near-acuity objects and in reducing the crowding effect, and could be useful as a clinical treatment for amblyopia.
Previous research suggests that visual attention can be allocated to locations in space (space-based attention) and to objects (object-based attention). The cueing effects associated with space-based attention tend to be large and are found consistently across experiments. Object-based attention effects, however, are small and found less consistently across experiments. In three experiments, we addressed the possibility that the variability in object-based attention effects across studies reflects a low incidence of such effects at the level of individual subjects. Experiment 1 measured space-based and object-based cueing effects for horizontal and vertical rectangles in 60 subjects, comparing commonly used target detection and discrimination tasks. In Experiment 2, we ran another 120 subjects in a target discrimination task in which rectangle orientation varied between subjects. Using parametric statistical methods, we found object-based effects only for horizontal rectangles. Bootstrapping methods were used to measure effects in individual subjects. Significant space-based cueing effects were found in nearly all subjects in both experiments, across tasks and rectangle orientations. However, only a small number of subjects exhibited significant object-based cueing effects. Experiment 3 measured only object-based attention effects using another common paradigm and again, using bootstrapping, we found only a small number of subjects with significant object-based cueing effects. Our results show that object-based effects are more prevalent for horizontal rectangles, in accordance with the theory that attention is allocated more easily along the horizontal meridian. The fact that so few individuals exhibit a significant object-based cueing effect may explain why previous studies of this effect have yielded inconsistent results.
The results from the current study highlight the importance of considering individual subject data in addition to commonly used statistical methods.
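The individual-subject bootstrap logic described above can be sketched as follows: resample one subject's trial-level reaction times with replacement and test whether the 95% percentile confidence interval of the cueing effect (invalid minus valid RT) excludes zero. All RT values below are invented for the example.

```python
# Minimal sketch of a per-subject bootstrap test for a cueing effect:
# resample trial RTs with replacement, build the distribution of the mean
# difference (invalid - valid), and check whether the 95% percentile CI
# excludes zero. Trial data are invented.

import random

def bootstrap_effect_ci(valid_rts, invalid_rts, n_boot=5000, seed=1):
    """Percentile 95% CI for the mean RT difference (invalid - valid)."""
    rng = random.Random(seed)
    diffs = []
    for _ in range(n_boot):
        v = [rng.choice(valid_rts) for _ in valid_rts]
        iv = [rng.choice(invalid_rts) for _ in invalid_rts]
        diffs.append(sum(iv) / len(iv) - sum(v) / len(v))
    diffs.sort()
    return diffs[int(0.025 * n_boot)], diffs[int(0.975 * n_boot)]

# Hypothetical single-subject RTs (ms): a clear space-based cueing effect.
valid_rts = [310, 305, 320, 298, 315, 307, 312, 301, 318, 309]
invalid_rts = [355, 348, 362, 351, 359, 345, 366, 352, 358, 349]

lo, hi = bootstrap_effect_ci(valid_rts, invalid_rts)
significant = lo > 0 or hi < 0
print(f"95% CI of cueing effect: [{lo:.1f}, {hi:.1f}] ms, significant: {significant}")
```

Applying such a test to each subject separately, rather than only to the group mean, is what reveals that space-based effects are nearly universal while object-based effects appear in only a minority of individuals.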