1.  Psychophysical Tests of the Hypothesis of a Bottom-Up Saliency Map in Primary Visual Cortex 
PLoS Computational Biology  2007;3(4):e62.
A unique vertical bar among horizontal bars is salient and pops out perceptually. Physiological data have suggested that mechanisms in the primary visual cortex (V1) contribute to the high saliency of such a unique basic feature, but indicated little regarding whether V1 plays an essential or peripheral role in input-driven or bottom-up saliency. Meanwhile, a biologically based V1 model has suggested that V1 mechanisms can also explain bottom-up saliencies beyond the pop-out of basic features, such as the low saliency of a unique conjunction feature such as a red vertical bar among red horizontal and green vertical bars, under the hypothesis that the bottom-up saliency at any location is signaled by the activity of the most active cell responding to it regardless of the cell's preferred features such as color and orientation. The model can account for phenomena such as the difficulties in conjunction feature search, asymmetries in visual search, and how background irregularities affect ease of search. In this paper, we report nontrivial predictions from the V1 saliency hypothesis, and their psychophysical tests and confirmations. The prediction that most clearly distinguishes the V1 saliency hypothesis from other models is that task-irrelevant features could interfere in visual search or segmentation tasks which rely significantly on bottom-up saliency. For instance, irrelevant colors can interfere in an orientation-based task, and the presence of horizontal and vertical bars can impair performance in a task based on oblique bars. Furthermore, properties of the intracortical interactions and neural selectivities in V1 predict specific emergent phenomena associated with visual grouping. Our findings support the idea that a bottom-up saliency map can be at a lower visual area than traditionally expected, with implications for top-down selection mechanisms.
Author Summary
Only a fraction of visual input can be selected for attentional scrutiny, often by focusing on a limited extent of the visual space. The selected location is often determined by the bottom-up visual inputs rather than the top-down intentions. For example, a red dot among green ones automatically attracts attention and is said to be salient. Physiological data have suggested that the primary visual cortex (V1) in the brain contributes to creating such bottom-up saliencies from visual inputs, but indicated little about whether V1 plays an essential or peripheral role in creating a saliency map of the input space to guide attention. Traditional psychological frameworks, based mainly on behavioral data, have implicated higher-level brain areas for the saliency map. Recently, it has been hypothesized that V1 creates this saliency map, such that the image location whose visual input evokes the highest response among all V1 output neurons is most likely selected from a visual scene for attentional processing. This paper derives nontrivial predictions from this hypothesis and presents their psychophysical tests and confirmations. Our findings suggest that bottom-up saliency is computed at a lower brain area than previously expected, and have implications for top-down attentional mechanisms.
doi:10.1371/journal.pcbi.0030062
PMCID: PMC1847698  PMID: 17411335
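The saliency rule at the heart of the V1 hypothesis above — saliency at a location is the activity of the most active cell responding to it, regardless of that cell's preferred feature — amounts to a pointwise maximum over feature-tuned response maps. A minimal sketch; array shapes and values are illustrative, not from the paper:

```python
import numpy as np

def saliency_map(responses):
    """Max-rule saliency: `responses` has shape (n_channels, H, W), one
    response map per feature-tuned population (e.g. color and orientation
    channels). Saliency at each location is the maximum response across
    channels, irrespective of which feature the winning cell prefers."""
    return np.max(responses, axis=0)

# Toy input: 4 feature channels on an 8x8 grid of mostly uniform activity,
# with one strongly responding cell at (3, 3) in channel 2 -- that location
# should dominate the resulting saliency map.
rng = np.random.default_rng(0)
resp = rng.uniform(0.1, 0.5, size=(4, 8, 8))
resp[2, 3, 3] = 1.0
sal = saliency_map(resp)
```

Under this rule the winning location pops out even though no single channel is privileged, which is what makes task-irrelevant features (e.g. color in an orientation task) able to interfere.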
2.  Detection is unaffected by the deployment of focal attention 
There has been much debate regarding how much information humans can extract from their environment without the use of limited attentional resources. In a recent study, Theeuwes et al. (2008) argued that even detection of simple feature targets is not possible without selection by focal attention. Supporting this claim, they found response time (RT) benefits in a simple feature (color) detection task when a target letter's identity was repeated on consecutive trials, suggesting that the letter was selected by focal attention and identified prior to detection. This intertrial repetition benefit remained even when observers were required to simultaneously identify a central digit. However, we found that intertrial repetition benefits disappeared when a simple color target was presented among a heterogeneously (rather than homogeneously) colored set of distractors, thus reducing its bottom-up salience. Still, detection performance remained high. Thus, detection performance was unaffected by whether a letter was focally attended and identified prior to detection or not. Intertrial identity repetition benefits also disappeared when observers were required to perform a simultaneous, attention-demanding central task (Experiment 2), or when unfamiliar Chinese characters were used (Experiment 3). Together, these results suggest that while shifts of focal attention can be affected by target salience, by the availability of excess cognitive resources, and by target familiarity, detection performance itself is unaffected by these manipulations and is thus unaffected by the deployment of focal attention.
doi:10.3389/fpsyg.2013.00284
PMCID: PMC3664323  PMID: 23750142
focal attention; perception; salience; locus of selection; priming
3.  The Theory-based Influence of Map Features on Risk Beliefs: Self-reports of What is Seen and Understood for Maps Depicting an Environmental Health Hazard 
Journal of health communication  2012;17(7):836-856.
Theory-based research is needed to understand how maps of environmental health risk information influence risk beliefs and protective behavior. Using theoretical concepts from multiple fields of study including visual cognition, semiotics, health behavior, and learning and memory supports a comprehensive assessment of this influence. We report results from thirteen cognitive interviews that provide theory-based insights into how visual features influenced what participants saw and the meaning of what they saw as they viewed three formats of water test results for private wells (choropleth map, dot map, and a table). The unit of perception, color, proximity to hazards, geographic distribution, and visual salience had substantial influences on what participants saw and their resulting risk beliefs. These influences are explained by theoretical factors that shape what is seen, properties of features that shape cognition (pre-attentive, symbolic, visual salience), information processing (top-down and bottom-up), and the strength of concrete compared to abstract information. Personal relevance guided top-down attention to proximal and larger hazards that shaped stronger risk beliefs. Meaning was more local for small perceptual units and global for large units. Three aspects of color were important: pre-attentive “incremental risk” meaning of sequential shading, symbolic safety meaning of stoplight colors, and visual salience that drew attention. The lack of imagery, geographic information, and color diminished interest in table information. Numeracy and prior beliefs influenced comprehension for some participants. Results guided the creation of an integrated conceptual framework for application to future studies. Ethics should guide the selection of map features that support appropriate communication goals.
doi:10.1080/10810730.2011.650933
PMCID: PMC3656721  PMID: 22715919
risk communication; visual communication; visual cognition; environmental health; health behavior; hazard proximity
4.  Implicit learning modulates attention capture: evidence from an item-specific proportion congruency manipulation 
A host of research has now shown that our explicit goals and intentions can, in large part, overcome the capture of visual attention by objects that differ from their surroundings in terms of size, shape, or color. Surprisingly however, there is little evidence for the role of implicit learning in mitigating capture effects despite the fact that such learning has been shown to strongly affect behavior in a host of other performance domains. Here, we employ a modified attention capture paradigm, based on the work of Theeuwes (1991, 1992), in which participants must search for an odd-shaped target amongst homogeneous distracters. On each trial, there is also a salient, but irrelevant odd-colored distracter. Across the experiments reported, we intermix two search contexts: for one set of distracters (e.g., squares) the shape singleton and color singleton coincide on a majority of trials (high proportion congruent condition), whereas for the other set of distracters (e.g., circles) the shape and color singletons are highly unlikely to coincide (low proportion congruent condition). Crucially, we find that observers learn to allow the capture of attention by the salient distracter to a greater extent in the high, compared to the low proportion congruent condition, albeit only when search is sufficiently difficult. Moreover, this effect of prior experience on search behavior occurs in the absence of awareness of our proportion manipulation. We argue that low-level properties of the search displays recruit representations of prior experience in a rapid, flexible, and implicit manner.
doi:10.3389/fpsyg.2014.00551
PMCID: PMC4044972  PMID: 24926280
attention capture; implicit learning; visual search; proportion congruency; episodic retrieval
5.  Affective Salience Can Reverse the Effects of Stimulus-Driven Salience on Eye Movements in Complex Scenes 
In natural vision both stimulus features and cognitive/affective factors influence an observer’s attention. However, the relationship between stimulus-driven (“bottom-up”) and cognitive/affective (“top-down”) factors remains controversial: Can affective salience counteract strong visual stimulus signals and shift attention allocation irrespective of bottom-up features? Is there any difference between negative and positive scenes in terms of their influence on attention deployment? Here we examined the impact of affective factors on eye movement behavior, to understand the competition between visual stimulus-driven salience and affective salience and how they affect gaze allocation in complex scene viewing. Building on our previous research, we compared predictions generated by a visual salience model with measures indexing participant-identified emotionally meaningful regions of each image. To examine how eye movement behavior differs for negative, positive, and neutral scenes, we examined the influence of affective salience in capturing attention according to emotional valence. Taken together, our results show that affective salience can override stimulus-driven salience and overall emotional valence can determine attention allocation in complex scenes. These findings are consistent with the hypothesis that cognitive/affective factors play a dominant role in active gaze control.
doi:10.3389/fpsyg.2012.00336
PMCID: PMC3457078  PMID: 23055990
affective salience; visual salience; eye movements; attention; top-down; bottom-up; stimulus-driven; regions of interest
6.  Influence of Low-Level Stimulus Features, Task Dependent Factors, and Spatial Biases on Overt Visual Attention 
PLoS Computational Biology  2010;6(5):e1000791.
Visual attention is thought to be driven by the interplay between low-level visual features and task dependent information content of local image regions, as well as by spatial viewing biases. Though dependent on experimental paradigms and model assumptions, this idea has given rise to varying claims that either bottom-up or top-down mechanisms dominate visual attention. To contribute toward a resolution of this discussion, here we quantify the influence of these factors and their relative importance in a set of classification tasks. Our stimuli consist of individual image patches (bubbles). For each bubble we derive three measures: a measure of salience based on low-level stimulus features, a measure of salience based on the task dependent information content derived from our subjects' classification responses and a measure of salience based on spatial viewing biases. Furthermore, we measure the empirical salience of each bubble based on our subjects' measured eye gazes thus characterizing the overt visual attention each bubble receives. A multivariate linear model relates the three salience measures to overt visual attention. It reveals that all three salience measures contribute significantly. The effect of spatial viewing biases is highest and rather constant in different tasks. The contribution of task dependent information is a close runner-up. Specifically, in a standardized task of judging facial expressions it scores highly. The contribution of low-level features is, on average, somewhat lower. However, in a prototypical search task, without an available template, it makes a strong contribution on par with the two other measures. Finally, the contributions of the three factors are only slightly redundant, and the semi-partial correlation coefficients are only slightly lower than the coefficients for full correlations. 
These data provide evidence that all three measures make significant and independent contributions and that none can be neglected in a model of human overt visual attention.
Author Summary
In our lifetime we make about 5 billion eye movements. Yet our knowledge about what determines where we look is surprisingly sketchy. Some traditional approaches assume that gaze is guided by simple image properties like local contrast (low-level features). Recent arguments emphasize the influence of tasks (high-level features) and motor constraints (spatial bias). The relative importance of these factors is still a topic of debate. In this study, subjects view and classify natural scenery and faces while their eye movements are recorded. The stimuli are composed of small image patches. For each of these patches we derive a measure for low-level features and spatial bias. Utilizing the subjects' classification responses, we additionally derive a measure reflecting the information content of a patch with respect to the classification task (high-level features). We show that the effect of spatial bias is highest, that high-level features are a close runner-up, and that low-level features have, on average, a smaller influence. Remarkably, the different contributions are mostly independent. Hence, all three measures contribute to the guidance of eye movements and have to be considered in a model of human visual attention.
doi:10.1371/journal.pcbi.1000791
PMCID: PMC2873902  PMID: 20502672
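The multivariate linear model described above — three per-bubble salience measures predicting empirical, fixation-based salience, with semi-partial correlations to check redundancy — can be sketched as follows. The data are simulated with made-up weights purely to illustrate the analysis, not to reproduce the study's numbers:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200  # number of image patches ("bubbles"); illustrative

# Hypothetical per-bubble predictors: low-level feature salience,
# task-dependent information content, and spatial viewing bias.
low_level = rng.normal(size=n)
task_info = rng.normal(size=n)
spatial_bias = rng.normal(size=n)

# Simulated empirical salience in which all three factors contribute,
# spatial bias most strongly (mirroring the reported ordering).
empirical = (0.5 * spatial_bias + 0.4 * task_info
             + 0.25 * low_level + rng.normal(scale=0.3, size=n))

# Multivariate linear model: empirical salience ~ three salience measures.
X = np.column_stack([np.ones(n), low_level, task_info, spatial_bias])
beta, *_ = np.linalg.lstsq(X, empirical, rcond=None)

def semipartial_r(y, x, others):
    """Correlation of y with the part of x not explained by the other
    predictors; near-equal full and semi-partial coefficients indicate
    the predictors are largely non-redundant."""
    Z = np.column_stack([np.ones(len(y))] + others)
    coef, *_ = np.linalg.lstsq(Z, x, rcond=None)
    resid = x - Z @ coef
    return np.corrcoef(y, resid)[0, 1]

sp_low = semipartial_r(empirical, low_level, [task_info, spatial_bias])
```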
7.  The effect of linguistic and visual salience in visual world studies 
Research using the visual world paradigm has demonstrated that visual input has a rapid effect on language interpretation tasks such as reference resolution and, conversely, that linguistic material—including verbs, prepositions and adjectives—can influence fixations to potential referents. More recent research has started to explore how this effect of linguistic input on fixations is mediated by properties of the visual stimulus, in particular by visual salience. In the present study we further explored the role of salience in the visual world paradigm, manipulating language-driven salience and visual salience. Specifically, we tested how linguistic salience (i.e., the greater accessibility of linguistically introduced entities) and visual salience (bottom-up, attention-grabbing visual aspects) interact. We recorded participants' eye movements during a MapTask, asking them to look from landmark to landmark displayed upon a map while hearing direction-giving instructions. The landmarks were of comparable size and color, except in the Visual Salience condition, in which one landmark had been made more visually salient. In the Linguistic Salience conditions, the instructions included references to an object not on the map. Response times and fixations were recorded. Visual Salience influenced the time course of fixations at both the beginning and the end of the trial but did not show a significant effect on response times. Linguistic Salience reduced response times and increased fixations to landmarks when they were associated with a linguistically salient entity that was not itself present on the map. When the target landmark was both visually and linguistically salient, it was fixated longer, but fixations were quicker when the target item was linguistically salient only. Our results suggest that the two types of salience work in parallel and that linguistic salience affects fixations even when the entity is not visually present.
doi:10.3389/fpsyg.2014.00176
PMCID: PMC3941304  PMID: 24624108
linguistic salience; visual salience; visual world paradigm; centering theory; saliency map
8.  Occipital Alpha Activity during Stimulus Processing Gates the Information Flow to Object-Selective Cortex 
PLoS Biology  2014;12(10):e1001965.
A simultaneous EEG-fMRI study demonstrates that alpha-band activity in early visual cortex is associated with gating visual information to downstream regions, boosting attended information and suppressing distraction.
Given the limited processing capabilities of the sensory system, it is essential that attended information is gated to downstream areas, whereas unattended information is blocked. While it has been proposed that alpha band (8–13 Hz) activity serves to route information to downstream regions by inhibiting neuronal processing in task-irrelevant regions, this hypothesis remains untested. Here we investigate how neuronal oscillations detected by electroencephalography in visual areas during working memory encoding serve to gate information reflected in the simultaneously recorded blood-oxygenation-level-dependent (BOLD) signals recorded by functional magnetic resonance imaging in downstream ventral regions. We used a paradigm in which 16 participants were presented with faces and landscapes in the right and left hemifields; one hemifield was attended and the other unattended. We observed that decreased alpha power contralateral to the attended object predicted the BOLD signal representing the attended object in ventral object-selective regions. Furthermore, increased alpha power ipsilateral to the attended object predicted a decrease in the BOLD signal representing the unattended object. We also found that the BOLD signal in the dorsal attention network inversely correlated with visual alpha power. This is the first demonstration, to our knowledge, that oscillations in the alpha band are implicated in the gating of information from the visual cortex to the ventral stream, as reflected in the representationally specific BOLD signal. This link of sensory alpha to downstream activity provides a neurophysiological substrate for the mechanism of selective attention during stimulus processing, which not only boosts the attended information but also suppresses distraction. 
Although previous studies have shown a relation between the BOLD signal from the dorsal attention network and the alpha band at rest, we demonstrate such a relation during a visuospatial task, indicating that the dorsal attention network exercises top-down control of visual alpha activity.
Author Summary
In complex environments, our sensory systems are bombarded with information. Only a fraction of this information is processed, whereas most is ignored. As such, our brain must rely on powerful mechanisms to filter the relevant information. It has been proposed that alpha band oscillations (8–13 Hz) gate task-relevant visual information to downstream areas and supress irrelevant visual information. We tested this hypothesis in a study that combined electroencephalography (EEG) and functional MRI (fMRI) recordings. From the EEG, we directly measured alpha band oscillations in early visual regions. Using fMRI, we quantified neuronal activity in downstream regions. The participants performed a spatial working memory task that required them to encode pictures of objects presented in the left field of view while ignoring objects in the right field (or vice versa). We found that suppression of alpha band activity in visual areas opened the gate for relevant visual information to be routed to downstream regions. Conversely, an increase in alpha oscillations suppressed visual information that was irrelevant to the task. These findings suggest that alpha band oscillations are directly involved in boosting attended information and suppressing distraction in the ventral visual stream.
doi:10.1371/journal.pbio.1001965
PMCID: PMC4205112  PMID: 25333286
9.  A Systematic Review of Studies That Aim to Determine Which Outcomes to Measure in Clinical Trials in Children  
PLoS Medicine  2008;5(4):e96.
Background
In clinical trials the selection of appropriate outcomes is crucial to the assessment of whether one intervention is better than another. Selection of inappropriate outcomes can compromise the utility of a trial. However, the process of selecting the most suitable outcomes to include can be complex. Our aim was to systematically review studies that address the process of selecting outcomes or outcome domains to measure in clinical trials in children.
Methods and Findings
We searched Cochrane databases (no date restrictions) in December 2006; and MEDLINE (1950 to 2006), CINAHL (1982 to 2006), and SCOPUS (1966 to 2006) in January 2007 for studies of the selection of outcomes for use in clinical trials in children. We also asked a group of experts in paediatric clinical research to refer us to any other relevant studies. From these articles we extracted data on the clinical condition of interest, description of the method used to select outcomes, the people involved in the selection process, the outcomes selected, and limitations of the method as defined by the authors. The literature search identified 8,889 potentially relevant abstracts. Of these, 70 were retrieved, and 25 were included in the review. These studies described the work of 13 collaborations representing various paediatric specialties including critical care, gastroenterology, haematology, psychiatry, neurology, respiratory paediatrics, rheumatology, neonatal medicine, and dentistry. Two groups utilised the Delphi technique, one used the nominal group technique, and one used both methods to reach a consensus about which outcomes should be measured in clinical trials. Other groups used semistructured discussion, and one group used a questionnaire-based survey. The collaborations involved clinical experts, research experts, and industry representatives. Three groups involved parents of children affected by the particular condition.
Conclusions
Very few studies address the appropriate choice of outcomes for clinical research with children, and in most paediatric specialties no research has been undertaken. Among the studies we did assess, very few involved parents or children in selecting outcomes that should be measured, and none directly involved children. Research should be undertaken to identify the best way to involve parents and children in assessing which outcomes should be measured in clinical trials.
Ian Sinha and colleagues show, in a systematic review of published studies, that there are very few studies that address the appropriate choice of outcomes for clinical research with children.
Editors' Summary
Background.
When adult patients are given a drug for a disease by their doctors, they can be sure that its benefits and harms will have been carefully studied in clinical trials. Clinical researchers will have asked how well the drug does when compared to other drugs by giving groups of patients the various treatments and determining several “outcomes.” These are measurements carefully chosen in advance by clinical experts that ensure that trials provide as much information as possible about how effectively a drug deals with a specific disease and whether it has any other effects on patients' health and daily life. The situation is very different, however, for pediatric (child) patients. About three-quarters of the drugs given to children are “off-label”—they have not been specifically tested in children. The assumption used to be that children are just small people who can safely take drugs tested in adults provided the dose is scaled down. However, it is now known that children's bodies handle many drugs differently from adult bodies and that a safe dose for an adult can sometimes kill a child even after scaling down for body size. Consequently, regulatory bodies in the US, Europe, and elsewhere now require clinical trials to be done in children and drugs for pediatric use to be specifically licensed.
Why Was This Study Done?
Because children are not small adults, the methodology used to design trials involving children needs to be adapted from that used to design trials in adult patients. In particular, the process of selecting the outcomes to include in pediatric trials needs to take into account the differences between adults and children. For example, because children's brains are still developing, it may be important to include outcome measures that will detect any effect that drugs have on intellectual development. In this study, therefore, the researchers undertook a systematic review of the medical literature to discover how much is known about the best way to select outcomes in clinical trials in children.
What Did the Researchers Do and Find?
The researchers used a predefined search strategy to identify all the studies published since 1950 that examined the selection of outcomes in clinical trials in children. They also asked experts in pediatric clinical research for details of relevant studies. Only 25 studies, which covered several pediatric specialties and were published by 13 collaborative groups, met the strict eligibility criteria laid down by the researchers for their systematic review. Several approaches previously used to choose outcomes in clinical trials in adults were used in these studies to select outcomes. Two groups used the “Delphi” technique, in which opinions are sought from individuals, collated, and fed back to the individuals to generate discussion and a final, consensus agreement. One group used the “nominal group technique,” which involves the use of structured face-to-face discussions to develop a solution to a problem followed by a vote. Another group used both methods. The remaining groups (except one that used a questionnaire) used semistructured discussion meetings or workshops to decide on outcomes. Although most of the groups included clinical experts, people doing research on the specific clinical condition under investigation, and industry representatives, only three groups asked parents about which outcomes should be included in the trials, and none asked children directly.
What Do These Findings Mean?
These findings indicate that very few studies have addressed the selection of appropriate outcomes for clinical research in children. Indeed, in many pediatric specialties no research has been done on this important topic. Importantly, some of the studies included in this systematic review clearly show that it is inappropriate to use the outcomes used in adult clinical trials in pediatric populations. Overall, although the studies identified in this review provide some useful information on the selection of outcomes in clinical trials in children, further research is urgently needed to ensure that this process is made easier and more uniform. In particular, much more research must be done to determine the best way to involve children and their parents in the selection of outcomes.
Additional Information.
Please access these Web sites via the online version of this summary at http://dx.doi.org/10.1371/journal.pmed.0050096.
A related PLoS Medicine Perspective article is available
The European Medicines Agency provides information about the regulation of medicines for children in Europe
The US Food and Drug Administration Office of Pediatric Therapeutics provides similar information for the US
The UK Medicines and Healthcare products Regulatory Agency also provides information on why medicines need to be tested in children
The UK Medicines for Children Research Network aims to facilitate the conduct of clinical trials of medicines for children
The James Lind Alliance has been established in the UK to increase patient involvement in medical research issues such as outcome selection in clinical trials
doi:10.1371/journal.pmed.0050096
PMCID: PMC2346505  PMID: 18447577
10.  Effective Connectivity during Haptic Perception: A Study Using Granger Causality Analysis of Functional Magnetic Resonance Imaging Data 
NeuroImage  2008;40(4):1807-1814.
Although it is accepted that visual cortical areas are recruited during touch, it remains uncertain whether this depends on top-down inputs mediating visual imagery or engagement of modality-independent representations by bottom-up somatosensory inputs. Here we addressed this by examining effective connectivity in humans during haptic perception of shape and texture with the right hand. Multivariate Granger causality analysis of functional magnetic resonance imaging (fMRI) data was conducted on a network of regions that were shape- or texture-selective. A novel network reduction procedure was employed to eliminate connections that did not contribute significantly to overall connectivity. Effective connectivity during haptic perception was found to involve a variety of interactions between areas generally regarded as somatosensory, multisensory, visual and motor, emphasizing flexible cooperation between different brain regions rather than rigid functional separation. The left postcentral sulcus (PCS), left precentral gyrus and right posterior insula were important sources of connections in the network. Bottom-up somatosensory inputs from the left PCS and right posterior insula fed into visual cortical areas, both the shape-selective right lateral occipital complex (LOC) and the texture-selective right medial occipital cortex (probable V2). In addition, top-down inputs from left postero-supero-medial parietal cortex influenced the right LOC. Thus, there is strong evidence for the bottom-up somatosensory inputs predicted by models of visual cortical areas as multisensory processors and suggestive evidence for top-down parietal (but not prefrontal) inputs that could mediate visual imagery. This is consistent with modality-independent representations accessible through both bottom-up sensory inputs and top-down processes such as visual imagery.
doi:10.1016/j.neuroimage.2008.01.044
PMCID: PMC2483676  PMID: 18329290
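Granger causality, the core of the analysis described above, asks whether the past of one time series improves prediction of another beyond that series' own past. A bivariate sketch with simulated series (the study used a multivariate formulation on fMRI data; the series and coupling here are invented for illustration):

```python
import numpy as np

def granger_F(x, y, lags=2):
    """F statistic for 'x Granger-causes y': compare a restricted model
    (y regressed on its own lags) against a full model that also includes
    lags of x. A large F means x's past improves prediction of y."""
    n = len(y)
    Y = y[lags:]
    own = np.column_stack([y[lags - k : n - k] for k in range(1, lags + 1)])
    cross = np.column_stack([x[lags - k : n - k] for k in range(1, lags + 1)])

    def rss(X):
        beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
        r = Y - X @ beta
        return r @ r

    ones = np.ones((len(Y), 1))
    rss_r = rss(np.hstack([ones, own]))          # restricted: own lags only
    rss_f = rss(np.hstack([ones, own, cross]))   # full: plus lags of x
    dof = len(Y) - (1 + 2 * lags)
    return ((rss_r - rss_f) / lags) / (rss_f / dof)

# Simulated pair: y is driven by the previous value of x, so the
# influence should be detected in one direction only.
rng = np.random.default_rng(2)
x = rng.normal(size=500)
y = 0.6 * np.roll(x, 1) + rng.normal(size=500)
y[0] = 0.0  # discard the wrap-around sample introduced by np.roll
F_xy = granger_F(x, y)  # should be large
F_yx = granger_F(y, x)  # should be near 1
```

The network reduction the paper describes would then prune connections whose added lags do not significantly reduce residual error, leaving only the contributing paths.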
11.  Top-Down but Not Bottom-Up Visual Scanning is Affected in Hereditary Pure Cerebellar Ataxia 
PLoS ONE  2014;9(12):e116181.
The aim of this study was to clarify the nature of visual processing deficits caused by cerebellar disorders. We studied the performance of two types of visual search (top-down visual scanning and bottom-up visual scanning) in 18 patients with pure cerebellar types of spinocerebellar degeneration (SCA6: 11; SCA31: 7). The gaze fixation position was recorded with an eye-tracking device while the subjects performed two visual search tasks in which they looked for a target Landolt figure among distractors. In the serial search task, the target was similar to the distractors and the subject had to search for the target by processing each item with top-down visual scanning. In the pop-out search task, the target and distractor were clearly discernible and the visual salience of the target allowed the subjects to detect it by bottom-up visual scanning. The saliency maps clearly showed that the serial search task required top-down visual attention and the pop-out search task required bottom-up visual attention. In the serial search task, the search time to detect the target was significantly longer in SCA patients than in normal subjects, whereas the search time in the pop-out search task was comparable between the two groups. These findings suggested that SCA patients cannot efficiently scan a target using a top-down attentional process, whereas scanning with a bottom-up attentional process is not affected. In the serial search task, the amplitude of saccades was significantly smaller in SCA patients than in normal subjects. The variability of saccade amplitude (saccadic dysmetria), number of re-fixations, and unstable fixation (nystagmus) were larger in SCA patients than in normal subjects, accounting for a substantial proportion of scattered fixations around the items. Saccadic dysmetria, re-fixation, and nystagmus may play important roles in the impaired top-down visual scanning in SCA, hampering precise visual processing of individual items.
doi:10.1371/journal.pone.0116181
PMCID: PMC4278854  PMID: 25545148
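The pop-out condition above relies on a target whose visual salience alone attracts gaze. As a toy illustration of such bottom-up salience, the sketch below scores each item by its feature contrast with the rest of the display (a crude stand-in for the iso-feature suppression thought to underlie pop-out); the grid, orientation values, and contrast measure are illustrative assumptions, not the authors' model:

```python
import numpy as np

# Hypothetical stimulus: orientations (deg) of bars on a 5x5 grid,
# one vertical target (90) among horizontal distractors (0).
orientations = np.zeros((5, 5))
orientations[2, 3] = 90.0

def contrast_salience(feat):
    # Salience of each item = mean absolute feature difference from all
    # items; a unique feature value stands out, similar ones suppress
    # each other.
    flat = feat.ravel()
    sal = np.array([np.abs(v - flat).mean() for v in flat])
    return sal.reshape(feat.shape)

sal = contrast_salience(orientations)
# Winner-take-all: the most salient location draws the first saccade.
target = np.unravel_index(sal.argmax(), sal.shape)
```

In this toy display the unique vertical bar wins regardless of any top-down scanning strategy, which is why pop-out search can remain intact when top-down scanning is impaired.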
12.  Cognitive programs: software for attention's executive 
Frontiers in Psychology  2014;5:1260.
What are the computational tasks that an executive controller for visual attention must solve? This question is posed in the context of the Selective Tuning model of attention. The range of required computations goes beyond top-down bias signals or region-of-interest determinations, and must deal with overt and covert fixations, process timing and synchronization, information routing, memory, matching control to task, spatial localization, priming, and coordination of bottom-up with top-down information. During task execution, results must be monitored to ensure they match expectations. This description includes the kinds of elements that are common in the control of any kind of complex machine or system. We seek a mechanistic integration of the above, in other words, algorithms that accomplish control. Such algorithms operate on representations, transforming a representation of one kind into another, which then forms the input to yet another algorithm. Cognitive Programs (CPs) are hypothesized to capture exactly such representational transformations via stepwise sequences of operations. CPs, an updated and modernized offspring of Ullman's Visual Routines, impose an algorithmic structure to the set of attentional functions and play a role in the overall shaping of attentional modulation of the visual system so that it provides its best performance. This requires that we consider the visual system as a dynamic, yet general-purpose processor tuned to the task and input of the moment. This differs dramatically from the almost universal cognitive and computational views, which regard vision as a passively observing module to which simple questions about percepts can be posed, regardless of task. Differing from Visual Routines, CPs explicitly involve the critical elements of Visual Task Executive (vTE), Visual Attention Executive (vAE), and Visual Working Memory (vWM). 
Cognitive Programs provide the software that directs the actions of the Selective Tuning model of visual attention.
doi:10.3389/fpsyg.2014.01260
PMCID: PMC4243492  PMID: 25505430
visual attention; executive; visual routines; working memory; selective tuning
13.  Rethinking the Role of Top-Down Attention in Vision: Effects Attributable to a Lossy Representation in Peripheral Vision 
According to common wisdom in the field of visual perception, top-down selective attention is required in order to bind features into objects. In this view, even simple tasks, such as distinguishing a rotated T from a rotated L, require selective attention since they require feature binding. Selective attention, in turn, is commonly conceived as involving volition, intention, and at least implicitly, awareness. There is something non-intuitive about the notion that we might need so expensive (and possibly human) a resource as conscious awareness in order to perform so basic a function as perception. In fact, we can carry out complex sensorimotor tasks, seemingly in the near absence of awareness or volitional shifts of attention (“zombie behaviors”). More generally, the tight association between attention and awareness, and the presumed role of attention on perception, is problematic. We propose that under normal viewing conditions, the main processes of feature binding and perception proceed largely independently of top-down selective attention. Recent work suggests that there is a significant loss of information in early stages of visual processing, especially in the periphery. In particular, our texture tiling model (TTM) represents images in terms of a fixed set of “texture” statistics computed over local pooling regions that tile the visual input. We argue that this lossy representation produces the perceptual ambiguities that have previously been ascribed to a lack of feature binding in the absence of selective attention. At the same time, the TTM representation is sufficiently rich to explain performance in such complex tasks as scene gist recognition, pop-out target search, and navigation. A number of phenomena that have previously been explained in terms of voluntary attention can be explained more parsimoniously with the TTM. 
In this model, peripheral vision introduces a specific kind of information loss, and the information available to an observer varies greatly depending upon shifts of the point of gaze (which usually occur without awareness). The available information, in turn, provides a key determinant of the visual system’s capabilities and deficiencies. This scheme dissociates basic perceptual operations, such as feature binding, from both top-down attention and conscious awareness.
doi:10.3389/fpsyg.2012.00013
PMCID: PMC3272623  PMID: 22347200
selective attention; limited capacity; search; scene perception; model; peripheral vision; compression
14.  Modeling the Effect of Selection History on Pop-Out Visual Search 
PLoS ONE  2014;9(3):e89996.
While attentional effects in visual selection tasks have traditionally been assigned “top-down” or “bottom-up” origins, more recently it has been proposed that there are three major factors affecting visual selection: (1) physical salience, (2) current goals and (3) selection history. Here, we look further into selection history by investigating Priming of Pop-out (POP) and the Distractor Preview Effect (DPE), two inter-trial effects that demonstrate the influence of recent history on visual search performance. Using the Ratcliff diffusion model, we model observed saccadic selections from an oddball search experiment that included a mix of both POP and DPE conditions. We find that the Ratcliff diffusion model can effectively model the manner in which selection history affects current attentional control in visual inter-trial effects. The model evidence shows that bias regarding the current trial's most likely target color is the most critical parameter underlying the effect of selection history. Our results are consistent with the view that the 3-item color-oddball task used for POP and DPE experiments is best understood as an attentional decision making task.
doi:10.1371/journal.pone.0089996
PMCID: PMC3940711  PMID: 24595032
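The abstract above reports that a starting-point bias toward the likely target color is the key diffusion-model parameter behind selection-history effects. The sketch below simulates single Ratcliff-style diffusion trials to show why such a bias speeds selection; all parameter values (drift, threshold, bias magnitude) are illustrative assumptions, not fitted values from the study:

```python
import random

def ddm_trial(drift, start_bias=0.0, threshold=1.0, noise=1.0,
              dt=0.001, rng=None):
    """One diffusion trial: evidence starts at `start_bias` (a
    selection-history prime toward the upper boundary) and accumulates
    until it crosses +threshold (select target) or -threshold
    (select distractor). Returns (chose_target, decision_time)."""
    rng = rng or random.Random(0)
    x, t = start_bias, 0.0
    while abs(x) < threshold:
        x += drift * dt + noise * rng.gauss(0.0, dt ** 0.5)
        t += dt
    return (x > 0, t)

rng = random.Random(42)
primed = [ddm_trial(1.0, start_bias=0.3, rng=rng) for _ in range(200)]
neutral = [ddm_trial(1.0, start_bias=0.0, rng=rng) for _ in range(200)]
# A prime toward the correct boundary shortens mean decision time,
# mirroring faster saccadic selection on repeated-target trials.
```

Priming of Pop-out would then correspond to the bias shifting toward the repeated color, and the Distractor Preview Effect to a shift away from the previewed distractor color.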
15.  Psychoanatomical substrates of Bálint's syndrome 
Objectives: From a series of glimpses, we perceive a seamless and richly detailed visual world. Cerebral damage, however, can destroy this illusion. In the case of Bálint's syndrome, the visual world is perceived erratically, as a series of single objects. The goal of this review is to explore a range of psychological and anatomical explanations for this striking visual disorder and to propose new directions for interpreting the findings in Bálint's syndrome and related cerebral disorders of visual processing.
Methods: Bálint's syndrome is reviewed in the light of current concepts and methodologies of vision research.
Results: The syndrome affects visual perception (causing simultanagnosia/visual disorientation) and visual control of eye and hand movement (causing ocular apraxia and optic ataxia). Although it has been generally construed as a biparietal syndrome causing an inability to see more than one object at a time, other lesions and mechanisms are also possible. Key syndrome components are dissociable and comprise a range of disturbances that overlap the hemineglect syndrome. Inouye's observations in similar cases, beginning in 1900, antedated Bálint's initial report. Because Bálint's syndrome is not common and is difficult to assess with standard clinical tools, the literature is dominated by case reports and confounded by case selection bias, non-uniform application of operational definitions, inadequate study of basic vision, poor lesion localisation, and failure to distinguish between deficits in the acute and chronic phases of recovery.
Conclusions: Studies of Bálint's syndrome have provided unique evidence on neural substrates for attention, perception, and visuomotor control. Future studies should address possible underlying psychoanatomical mechanisms at "bottom up" and "top down" levels, and should specifically consider visual working memory and attention (including object based attention) as well as systems for identification of object structure and depth from binocular stereopsis, kinetic depth, motion parallax, eye movement signals, and other cues.
doi:10.1136/jnnp.72.2.162
PMCID: PMC1737727  PMID: 11796765
16.  A distributed neural system for top-down face processing 
Neuroscience letters  2008;451(1):6-10.
Evidence suggests that the neural system associated with face processing is a distributed cortical network containing both bottom-up and top-down mechanisms. While bottom-up face processing has been the focus of many studies, the neural areas involved in the top-down face processing have not been extensively investigated due to difficulty in isolating top-down influences from the bottom-up response engendered by presentation of a face. In the present study, we used a novel experimental method to induce illusory face detection. This method allowed for directly examining the neural systems involved in top-down face processing while minimizing the influence of bottom-up perceptual input. A distributed cortical network of top-down face processing was identified by analyzing the functional connectivity patterns of the right fusiform face area (FFA). This distributed cortical network model for face processing includes both “core” and “extended” face processing areas. It also includes left anterior cingulate cortex (ACC), bilateral orbitofrontal cortex (OFC), left dorsolateral prefrontal cortex (DLPFC), left premotor cortex, and left inferior parietal cortex. These findings suggest that top-down face processing contains not only regions for analyzing the visual appearance of faces, but also those involved in processing low spatial frequency (LSF) information, decision making, and working memory.
doi:10.1016/j.neulet.2008.12.039
PMCID: PMC2634849  PMID: 19121364
top-down processing; psychophysiological interaction (PPI); distributed cortical network; fMRI; face processing
17.  Developmental Changes in Natural Viewing Behavior: Bottom-Up and Top-Down Differences between Children, Young Adults and Older Adults 
Despite the growing interest in fixation selection under natural conditions, there is a major gap in the literature concerning its developmental aspects. Early in life, bottom-up processes, such as viewing guided by local image features (color, luminance contrast, etc.), might be prominent but later overshadowed by more top-down processing. Moreover, with decline in visual functioning in old age, bottom-up processing is known to suffer. Here we recorded eye movements of 7- to 9-year-old children, 19- to 27-year-old adults, and older adults above 72 years of age while they viewed natural and complex images before performing a patch-recognition task. Task performance displayed the classical inverted U-shape, with young adults outperforming the other age groups. Fixation discrimination performance of local feature values dropped with age. Whereas children displayed the highest feature values at fixated points, suggesting a bottom-up mechanism, older adult viewing behavior was less feature-dependent, reminiscent of a top-down strategy. Importantly, we observed a double dissociation between children and older adults regarding the effects of active viewing on feature-related viewing: Explorativeness correlated with feature-related viewing negatively in young age, and positively in older adults. The results indicate that, with age, bottom-up fixation selection loses strength and/or the role of top-down processes becomes more important. Older adults who increase their feature-related viewing by being more explorative make use of this low-level information and perform better in the task. The present study thus reveals an important developmental change in natural and task-guided viewing.
doi:10.3389/fpsyg.2010.00207
PMCID: PMC3153813  PMID: 21833263
age differences; development; overt attention; natural scenes; eye movements
18.  The Timing of Vision – How Neural Processing Links to Different Temporal Dynamics 
In this review, we describe our recent attempts to model the neural correlates of visual perception with biologically inspired networks of spiking neurons, emphasizing the dynamical aspects. Experimental evidence suggests distinct processing modes depending on the type of task the visual system is engaged in. A first mode, crucial for object recognition, deals with rapidly extracting the glimpse of a visual scene in the first 100 ms after its presentation. The promptness of this process points to mainly feedforward processing, which relies on latency coding, and may be shaped by spike timing-dependent plasticity (STDP). Our simulations confirm the plausibility and efficiency of such a scheme. A second mode can be engaged whenever one needs to perform finer perceptual discrimination through evidence accumulation on the order of 400 ms and above. Here, our simulations, together with theoretical considerations, show how predominantly local recurrent connections and long neural time-constants enable the integration and build-up of firing rates on this timescale. In particular, we review how a non-linear model with attractor states induced by strong recurrent connectivity provides straightforward explanations for several recent experimental observations. A third mode, involving additional top-down attentional signals, is relevant for more complex visual scene processing. In the model, as in the brain, these top-down attentional signals shape visual processing by biasing the competition between different pools of neurons. The winning pools may not only have a higher firing rate, but also more synchronous oscillatory activity. This fourth mode, oscillatory activity, leads to faster reaction times and enhanced information transfers in the model. This has indeed been observed experimentally. Moreover, oscillatory activity can format spike times and encode information in the spike phases with respect to the oscillatory cycle. 
This phenomenon is referred to as “phase-of-firing coding,” and experimental evidence for it is accumulating in the visual system. Simulations show that this code can again be efficiently decoded by STDP. Future work should focus on continuous natural vision, bio-inspired hardware vision systems, and novel experimental paradigms to further distinguish current modeling approaches.
doi:10.3389/fpsyg.2011.00151
PMCID: PMC3129241  PMID: 21747774
vision; attention; spiking neurons; neurodynamics; oscillations; STDP; neural coding; decision making
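The review above repeatedly invokes spike timing-dependent plasticity (STDP) as the mechanism that can learn and decode latency or phase codes. As a minimal illustration of the asymmetric STDP window it refers to, the sketch below uses standard pair-based update rules with illustrative parameter values (the amplitudes and time constant are assumptions, not the models' fitted values):

```python
import math

def stdp(dt_ms, a_plus=0.01, a_minus=0.012, tau_ms=20.0):
    """Pair-based STDP window. dt_ms = t_post - t_pre:
    pre-before-post (dt > 0) potentiates, post-before-pre (dt < 0)
    depresses, both decaying exponentially with |dt|."""
    if dt_ms > 0:
        return a_plus * math.exp(-dt_ms / tau_ms)
    return -a_minus * math.exp(dt_ms / tau_ms)

# A synapse whose pre-spike reliably leads the post-spike (causal
# pairing) is strengthened; one firing just after the post-spike
# is weakened.
w_causal = 0.5 + stdp(+5.0)
w_acausal = 0.5 + stdp(-5.0)
```

Repeated application of this asymmetric rule is what lets a readout neuron become selective for inputs that consistently fire early in a latency code, or at a particular phase of an oscillatory cycle.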
19.  Where’s Waldo? How perceptual, cognitive, and emotional brain processes cooperate during learning to categorize and find desired objects in a cluttered scene 
The Where’s Waldo problem concerns how individuals can rapidly learn to search a scene to detect, attend, recognize, and look at a valued target object in it. This article develops the ARTSCAN Search neural model to clarify how brain mechanisms across the What and Where cortical streams are coordinated to solve the Where’s Waldo problem. The What stream learns positionally-invariant object representations, whereas the Where stream controls positionally-selective spatial and action representations. The model overcomes deficiencies of these computationally complementary properties through What and Where stream interactions. Where stream processes of spatial attention and predictive eye movement control modulate What stream processes whereby multiple view- and positionally-specific object categories are learned and associatively linked to view- and positionally-invariant object categories through bottom-up and attentive top-down interactions. Gain fields control the coordinate transformations that enable spatial attention and predictive eye movements to carry out this role. What stream cognitive-emotional learning processes enable the focusing of motivated attention upon the invariant object categories of desired objects. What stream cognitive names or motivational drives can prime a view- and positionally-invariant object category of a desired target object. A volitional signal can convert these primes into top-down activations that can, in turn, prime What stream view- and positionally-specific categories. When it also receives bottom-up activation from a target, such a positionally-specific category can cause an attentional shift in the Where stream to the positional representation of the target, and an eye movement can then be elicited to foveate it. These processes describe interactions among brain regions that include visual cortex, parietal cortex, inferotemporal cortex, prefrontal cortex (PFC), amygdala, basal ganglia (BG), and superior colliculus (SC).
doi:10.3389/fnint.2014.00043
PMCID: PMC4060746  PMID: 24987339
visual search; Where’s Waldo problem; spatial attention; object attention; category learning; gain field; reinforcement learning; eye movement
20.  Selective biasing of a specific bistable-figure percept involves fMRI signal changes in frontostriatal circuits 
Attention, suggestion, context and expectation can all exert top-down influence on bottom-up processes (e.g., stimulus-driven mechanisms). Identifying the functional neuroanatomy that subserves top-down influences on sensory information processing can unlock the neural substrates of how suggestion can modulate behavior. Using functional magnetic resonance imaging (fMRI), we scanned 10 healthy participants (five men) viewing five bistable figures. Participants received a directional cue to perceive a particular spatial orientation a few seconds before the bistable figure appeared. After presentation, participants pressed a button to indicate their locking into the one desired orientation of the two possible interpretations. Participants additionally performed tests of impulse control and sustained attention. Our findings reveal the role of specific frontostriatal structures in selecting a particular orientation for bistable figures, including dorsolateral prefrontal regions and the putamen. Additional contrasts further bolstered the role of the frontostriatal system in the top-down processing of competing visual perceptions. Separate correlations of behavioral variables with fMRI activations support the idea that the frontostriatal system may mediate attentional control when selecting among competing visual perceptions. These results may generalize to other psychological functions. With special relevance to clinical neuroscience and applications involving attention, expectation and suggestion (e.g., hypnosis), our results address the importance of frontostriatal circuitry in behavioral modulation.
PMCID: PMC2386759  PMID: 18030926
Top-down effect; cognitive control; cortico-striato-thalamocortical (CSTC) circuits; attention; expectation; hypnosis; self-regulation; impulse control
21.  What Guides Visual Overt Attention under Natural Conditions? Past and Future Research 
ISRN Neuroscience  2013;2013:868491.
In the last decade, overt attention under natural conditions became a prominent topic in neuroscientific and psychological research. In this context, one central question is “what guides the direction of gaze on complex visual scenes?” In the present review, recent research on bottom-up influences on overt attention is presented first. Against this background, strengths and limitations of the bottom-up approach are discussed and future directions in this field are outlined. In addition, the current scope on top-down factors in visual attention is broadened by discussing the impact of emotions and motivational tendencies on viewing behavior. Overall, this review highlights how behavioral and neurophysiological research on overt attention can benefit from a broader scope on influential factors in visual attention.
doi:10.1155/2013/868491
PMCID: PMC4045567  PMID: 24959568
22.  Behavioral biases when viewing multiplexed scenes: scene structure and frames of reference for inspection 
Where people look when viewing a scene has been a much explored avenue of vision research (e.g., see Tatler, 2009). Current understanding of eye guidance suggests that a combination of high and low-level factors influence fixation selection (e.g., Torralba et al., 2006), but that there are also strong biases toward the center of an image (Tatler, 2007). However, situations where we view multiplexed scenes are becoming increasingly common, and it is unclear how visual inspection might be arranged when content lacks normal semantic or spatial structure. Here we use the central bias to examine how gaze behavior is organized in scenes that are presented in their normal format, or disrupted by scrambling the quadrants and separating them by space. In Experiment 1, scrambling scenes had the strongest influence on gaze allocation. Observers were highly biased by the quadrant center, although physical space did not enhance this bias. However, the center of the display still contributed to fixation selection above chance, and was most influential early in scene viewing. When the top left quadrant was held constant across all conditions in Experiment 2, fixation behavior was significantly influenced by the overall arrangement of the display, with fixations being biased toward the quadrant center when the other three quadrants were scrambled (despite the visual information in this quadrant being identical in all conditions). When scenes are scrambled into four quadrants and semantic contiguity is disrupted, observers no longer appear to view the content as a single scene (despite it consisting of the same visual information overall), but rather anchor visual inspection around the four separate “sub-scenes.” Moreover, the frame of reference that observers use when viewing the multiplex seems to change across viewing time: from an early bias toward the display center to a later bias toward quadrant centers.
doi:10.3389/fpsyg.2013.00624
PMCID: PMC3781347  PMID: 24069008
scene viewing; scene structure; central bias; multiplex; frames of reference
23.  The Contributions of Image Content and Behavioral Relevancy to Overt Attention 
PLoS ONE  2014;9(4):e93254.
During free-viewing of natural scenes, eye movements are guided by bottom-up factors inherent to the stimulus, as well as top-down factors inherent to the observer. The question of how these two different sources of information interact and contribute to fixation behavior has recently received a lot of attention. Here, a battery of 15 visual stimulus features was used to quantify the contribution of stimulus properties during free-viewing of 4 different categories of images (Natural, Urban, Fractal and Pink Noise). Behaviorally relevant information was estimated in the form of topographical interestingness maps by asking an independent set of subjects to click at image regions that they subjectively found most interesting. Using a Bayesian scheme, we computed saliency functions that described the probability of a given feature to be fixated. In the case of stimulus features, the precise shape of the saliency functions was strongly dependent upon image category and overall the saliency associated with these features was generally weak. When testing multiple features jointly, a linear additive integration model of individual saliencies performed satisfactorily. We found that the saliency associated with interesting locations was much higher than any low-level image feature and any pair-wise combination thereof. Furthermore, the low-level image features were found to be maximally salient at those locations that had already high interestingness ratings. Temporal analysis showed that regions with high interestingness ratings were fixated as early as the third fixation following stimulus onset. Paralleling these findings, fixation durations were found to be dependent mainly on interestingness ratings and to a lesser extent on the low-level image features. 
Our results suggest that both low- and high-level sources of information play a significant role during exploration of complex scenes with behaviorally relevant information being more effective compared to stimulus features.
doi:10.1371/journal.pone.0093254
PMCID: PMC3988016  PMID: 24736751
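The "Bayesian scheme" mentioned in the abstract above turns feature distributions into a saliency function: the probability of fixation given a feature value is proportional to the feature's distribution at fixated locations divided by its distribution over the whole image. The sketch below applies Bayes' rule to synthetic data; the feature ("contrast"), the sampling bias, and the binning are illustrative assumptions, not the study's stimuli:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: a feature value (say, local contrast in [0, 1])
# sampled over the whole image, and at "fixated" locations, where the
# chance of fixation is made proportional to the feature value.
all_values = rng.uniform(0.0, 1.0, 10_000)
fixated = all_values[rng.uniform(0.0, 1.0, 10_000) < all_values]

bins = np.linspace(0.0, 1.0, 11)
p_feat, _ = np.histogram(all_values, bins, density=True)   # p(f)
p_feat_fix, _ = np.histogram(fixated, bins, density=True)  # p(f | fixated)

# Bayes: P(fixated | f) is proportional to p(f | fixated) / p(f).
saliency = p_feat_fix / p_feat
```

Under this construction, a flat saliency function means the feature carries no information about fixation; here the function rises with contrast because the toy fixations were biased toward it, analogous to the weak but non-zero saliency the study reports for low-level features.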
24.  The meaning of quality work from the general practitioner's perspective: an interview study 
BMC Family Practice  2006;7:60.
Background
The quality of health care and its costs have been a subject of considerable attention and lively discussion. Various methods have been introduced to measure, assess, and improve the quality of health care. Many professionals in health care have criticized quality work and its methods as being unsuitable for health care. The aim of the study was to obtain a deeper understanding of the meaning of quality work from the general practitioner's perspective.
Methods
Fourteen general practitioners, seven women and seven men, were interviewed with the aid of a semi-structured interview guide about their experience of quality work. The interviews were tape-recorded and transcribed verbatim. Data collection and analysis were guided by a phenomenological approach intended to capture the essence of the statements.
Results
Two fundamentally different ways to view quality work emerged from the statements: A pronounced top-down perspective with elements of control, and an intra-profession or bottom-up perspective. From the top-down perspective, quality work was described as something that infringes professional freedom. From the bottom-up perspective the statements described quality work as a self-evident duty and as a professional attitude to the medical vocation, guided by the principles of medical ethics. Follow-up with a bottom-up approach is best done in internal processes, with the profession itself designing structures and methods based on its own needs.
Conclusions
The study indicates that general practitioners view internal follow-up as a professional obligation but external control as an imposition. This opposition entails a difficulty in achieving systematism in follow-up and quality work in health care. If the statutory standards for systematic quality work are to gain a real foothold, they must be packaged in such a way that general practitioners feel that both perspectives can be reconciled.
doi:10.1186/1471-2296-7-60
PMCID: PMC1624837  PMID: 17052342
25.  Frontal–Occipital Connectivity During Visual Search 
Brain Connectivity  2012;2(3):164-175.
Abstract
Although expectation- and attention-related interactions between ventral and medial prefrontal cortex and stimulus category-selective visual regions have been identified during visual detection and discrimination, it is not known if similar neural mechanisms apply to other tasks such as visual search. The current work tested the hypothesis that high-level frontal regions, previously implicated in expectation and visual imagery of object categories, interact with visual regions associated with object recognition during visual search. Using functional magnetic resonance imaging, subjects searched for a specific object that varied in size and location within a complex natural scene. A model-free, spatial-independent component analysis isolated multiple task-related components, one of which included visual cortex, as well as a cluster within ventromedial prefrontal cortex (vmPFC), consistent with the engagement of both top-down and bottom-up processes. Analyses of psychophysiological interactions showed increased functional connectivity between vmPFC and object-sensitive lateral occipital cortex (LOC), and results from dynamic causal modeling and Bayesian Model Selection suggested bidirectional connections between vmPFC and LOC that were positively modulated by the task. Using image-guided diffusion-tensor imaging, functionally seeded, probabilistic white-matter tracts between vmPFC and LOC, which presumably underlie this effective interconnectivity, were also observed. These connectivity findings extend previous models of visual search processes to include specific frontal–occipital neuronal interactions during a natural and complex search task.
doi:10.1089/brain.2012.0072
PMCID: PMC3621345  PMID: 22708993
diffusion tensor imaging; dynamic causal modeling; fMRI; independent component analysis; lateral occipital cortex; object detection; ventromedial prefrontal cortex; visual search