Results 1-25 (1046763)

1.  Reinterpreting Behavioral Receptive Fields: Lightness Induction Alters Visually Completed Shape 
PLoS ONE  2013;8(6):e62505.
A classification image (CI) technique has shown that static luminance noise near visually completed contours affects the discrimination of fat and thin Kanizsa shapes. These influential noise regions were proposed to reveal “behavioral receptive fields” of completed contours: the same regions to which early cortical cells respond in neurophysiological studies of contour completion. Here, we hypothesized that 1) influential noise regions correspond to the surfaces that distinguish fat and thin shapes (hereafter, key regions); and 2) key region noise biases a “fat” response to the extent that its contrast polarity (lighter or darker than the background) matches the shape's filled-in surface color.
To test our hypothesis, we had observers discriminate fat and thin noise-embedded rectangles that were defined by either illusory or luminance-defined contours (Experiment 1). Surrounding elements (“inducers”) caused the shapes to appear either lighter or darker than the background, a process sometimes referred to as lightness induction. For both illusory and luminance-defined rectangles, key region noise biased a fat response to the extent that its contrast polarity (light or dark) matched the induced surface color. When lightness induction was minimized, luminance noise had no consistent influence on shape discrimination. This pattern arose when pixels immediately adjacent to the discriminated boundaries were excluded from the analysis (Experiment 2) and also when the noise was restricted to the key regions so that it never overlapped with the physically visible edges (Experiment 3). The lightness effects did not occur in the absence of enclosing boundaries (Experiment 4).
Under noisy conditions, lightness induction alters visually completed shape. Moreover, behavioral receptive fields derived in CI studies do not correspond to contours per se but to filled-in surface regions contained by those contours. The relevance of lightness to two-dimensional shape completion supplies a new constraint for models of object perception.
PMCID: PMC3672097  PMID: 23750200
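The classification-image technique referred to in entry 1 can be illustrated with a brief sketch. This is a minimal reverse-correlation example assuming the standard difference-of-means formulation; the array shapes, the simulated noise and responses, and the key-region mask are placeholders, not details from the study.

```python
# Minimal sketch, assuming a difference-of-means classification image;
# all data here are simulated placeholders.
import numpy as np

rng = np.random.default_rng(0)
n_trials, h, w = 2000, 64, 64

# Per-trial luminance noise fields and binary responses (1 = "fat", 0 = "thin").
noise = rng.normal(0.0, 1.0, size=(n_trials, h, w))
responses = rng.integers(0, 2, size=n_trials)

# Classification image: mean noise on "fat" trials minus mean noise on "thin" trials.
ci = noise[responses == 1].mean(axis=0) - noise[responses == 0].mean(axis=0)

# Pixels with large |CI| values mark "influential noise regions"; averaging the CI
# over a hypothesized key region gives a region-level influence score.
key_region = np.zeros((h, w), dtype=bool)
key_region[16:48, 8:24] = True          # illustrative mask only
print("key-region influence:", float(ci[key_region].mean()))
```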
2.  Faces in Places: Humans and Machines Make Similar Face Detection Errors 
PLoS ONE  2011;6(10):e25373.
The human visual system seems to be particularly efficient at detecting faces. This efficiency sometimes comes at the cost of erroneously seeing faces in arbitrary patterns, including famous examples such as a rock formation on Mars or the scorch patterns on a slice of toast. In machine vision, face detection has made considerable progress and has become a standard feature of many digital cameras. Arguably the most widespread algorithm for such applications (the “Viola-Jones” algorithm) achieves high detection rates at high computational efficiency. To what extent do the patterns that the algorithm mistakenly classifies as faces also fool humans? We selected three kinds of stimuli from real-life, first-person-perspective movies based on the algorithm's output: correct detections (“real faces”), false positives (“illusory faces”), and correctly rejected locations (“non faces”). Observers were shown pairs of these stimuli for 20 ms and had to direct their gaze to the location of the face. We found that illusory faces were mistaken for faces more frequently than non faces. In addition, rotation of the real face yielded more errors, while rotation of the illusory face yielded fewer errors. Using colored stimuli increased overall performance but did not change the pattern of results. When the eye movement was replaced by a manual response, however, the preference for illusory faces over non faces disappeared. Taken together, our data show that humans make face-detection errors similar to those of the Viola-Jones algorithm when directing their gaze to briefly presented stimuli. In particular, the relative spatial arrangement of oriented filters seems relevant. This suggests that efficient face detection in humans is likely to be pre-attentive and based on rather simple features such as those encoded in the early visual system.
PMCID: PMC3187842  PMID: 21998653
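For reference, the Viola-Jones detector discussed in entry 2 is available as OpenCV's stock Haar-cascade classifier. Below is a minimal sketch of how candidate detections might be harvested from a single movie frame; the file names are placeholders, and the sorting of detections into real, illusory, and non-face categories (which requires ground-truth annotations) is not shown.

```python
# Minimal sketch using OpenCV's stock Haar-cascade (Viola-Jones) face detector;
# the frame path is a placeholder.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

img = cv2.imread("frame_0001.png")          # placeholder: one frame of a movie
if img is None:
    raise SystemExit("placeholder frame not found")

gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# Each detection is an (x, y, width, height) bounding box.
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5,
                                 minSize=(30, 30))
for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("frame_0001_detections.png", img)
```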
3.  Neural Correlates of Top-Down Letter Processing 
Neuropsychologia  2009;48(2):636.
This fMRI study investigated top-down letter processing with an illusory letter detection task. Participants reported whether one of several possible letters was present in a very noisy image. After initial training in which the task became increasingly difficult, they continued to detect letters even when the images consisted of pure noise, which eliminated contamination from strong bottom-up input. For illusory letter detection, greater fMRI activation was observed in several cortical regions. These regions included the precuneus, an area generally involved in top-down processing of objects, and the left superior parietal lobule, an area previously identified with the processing of valid letter and word stimuli. In addition, top-down letter detection activated the left inferior frontal gyrus, an area that may be involved in the integration of general top-down processing and letter-specific bottom-up processing. These findings suggest that these regions may play a significant role in top-down as well as bottom-up processing of letters and words, and are likely to have reciprocal functional connections to more posterior regions in the word and letter processing network.
PMCID: PMC2814001  PMID: 19883666
word processing; letter processing; top-down processing; fMRI
4.  Effective connectivities of cortical regions for top-down face processing: A Dynamic Causal Modeling study 
Brain Research  2010;1340:40-51.
To study top-down face processing, the present study used an experimental paradigm in which participants detected non-existent faces in pure noise images. Conventional BOLD signal analysis identified three regions involved in this illusory face detection. These regions included the left orbitofrontal cortex (OFC) in addition to the right fusiform face area (FFA) and right occipital face area (OFA), both of which were previously known to be involved in both top-down and bottom-up processing of faces. We used Dynamic Causal Modeling (DCM) and Bayesian model selection to further analyze the data, revealing both intrinsic and modulatory effective connectivities among these three cortical regions. Specifically, our results support the claim that the orbitofrontal cortex plays a crucial role in the top-down processing of faces by regulating the activities of the occipital face area, and the occipital face area in turn detects the illusory face features in the visual stimuli and then provides this information to the fusiform face area for further analysis.
PMCID: PMC3724518  PMID: 20423709
Face processing; Top-down processing; Bottom-up processing; Dynamic Causal Modeling (DCM); Orbitofrontal cortex (OFC)
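Dynamic Causal Modeling, used in entry 4, rests on a bilinear neural state equation, dz/dt = (A + Σ_j u_j B_j) z + C u (Friston et al., 2003). The sketch below integrates that equation for a toy three-region network; the connection strengths and input timing are arbitrary illustrations, not the estimates reported in the paper, and the full DCM machinery (hemodynamic forward model, Bayesian inversion, model selection) is omitted.

```python
# Minimal sketch of the bilinear DCM state equation for a toy OFC/OFA/FFA network;
# connection values and input timing are arbitrary illustrations.
import numpy as np

regions = ["OFC", "OFA", "FFA"]
A = np.array([[-1.0,  0.0,  0.0],    # intrinsic connections, entry [i, j] = j -> i
              [ 0.4, -1.0,  0.0],    # OFC -> OFA
              [ 0.0,  0.3, -1.0]])   # OFA -> FFA
B = np.array([[0.0, 0.0, 0.0],       # modulation of OFC -> OFA by the input
              [0.2, 0.0, 0.0],
              [0.0, 0.0, 0.0]])
C = np.array([0.5, 0.0, 0.0])        # driving input enters at OFC

dt, T = 0.01, 10.0
u = lambda t: 1.0 if 2.0 <= t <= 6.0 else 0.0     # boxcar "face detection" input

z = np.zeros(3)
trace = []
for i in range(int(T / dt)):
    t = i * dt
    dz = (A + u(t) * B) @ z + C * u(t)            # dz/dt = (A + u B) z + C u
    z = z + dt * dz                               # forward-Euler step
    trace.append(z.copy())
trace = np.array(trace)
print({r: round(float(trace[:, k].max()), 3) for k, r in enumerate(regions)})
```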
5.  Early processing in human LOC is highly responsive to illusory contours but not to salient regions 
The European Journal of Neuroscience  2009;30(10):2018-2028.
Human electrophysiological studies support a model whereby sensitivity to so-called illusory contour stimuli is first seen within the lateral occipital complex. A challenge to this model posits that the lateral occipital complex is a general site for crude region-based segmentation, based on findings of equivalent hemodynamic activations in the lateral occipital complex to illusory contour and so-called salient region stimuli, a stimulus class that lacks the classic bounding contours of illusory contours. Using high-density electrical mapping of visual evoked potentials, we show that early lateral occipital cortex activity is substantially stronger to illusory contour than to salient region stimuli, while later lateral occipital complex activity is stronger to salient region than to illusory contour stimuli. Our results suggest that equivalent hemodynamic activity to illusory contour and salient region stimuli likely reflects temporally integrated responses, a result of the poor temporal resolution of hemodynamic imaging. The temporal precision of visual evoked potentials is critical for establishing viable models of completion processes and visual scene analysis. We propose that crude spatial segmentation analyses, which are insensitive to illusory contours, occur first within dorsal visual regions, not lateral occipital complex, and that initial illusory contour sensitivity is a function of the lateral occipital complex.
PMCID: PMC3224794  PMID: 19895562
object recognition; event-related potentials; ERP; filling-in
6.  Hearing Lips and Seeing Voices: How Cortical Areas Supporting Speech Production Mediate Audiovisual Speech Perception 
Observing a speaker’s mouth profoundly influences speech perception. For example, listeners perceive an “illusory” “ta” when the video of a face producing /ka/ is dubbed onto an audio /pa/. Here, we show how cortical areas supporting speech production mediate this illusory percept and audiovisual (AV) speech perception more generally. Specifically, cortical activity during AV speech perception occurs in many of the same areas that are active during speech production. We find that different perceptions of the same syllable and the perception of different syllables are associated with different distributions of activity in frontal motor areas involved in speech production. Activity patterns in these frontal motor areas resulting from the illusory “ta” percept are more similar to the activity patterns evoked by AV/ta/ than they are to patterns evoked by AV/pa/ or AV/ka/. In contrast to the activity in frontal motor areas, stimulus-evoked activity for the illusory “ta” in auditory and somatosensory areas and visual areas initially resembles activity evoked by AV/pa/ and AV/ka/, respectively. Ultimately, though, activity in these regions comes to resemble activity evoked by AV/ta/. Together, these results suggest that AV speech elicits in the listener a motor plan for the production of the phoneme that the speaker might have been attempting to produce, and that feedback in the form of efference copy from the motor system ultimately influences the phonetic interpretation.
PMCID: PMC2896890  PMID: 17218482
audiovisual speech perception; efference copy; McGurk effect; mirror system; motor system; prediction
7.  Orientation-selective adaptation to illusory contours in human visual cortex 
Humans can perceive illusory or subjective contours in the absence of any real physical boundaries. We used an adaptation protocol to look for orientation-selective neural responses to illusory contours defined by phase-shifted abutting line gratings in the human visual cortex. We measured fMRI responses to illusory-contour test stimuli after adapting to an illusory-contour adapter stimulus that was oriented parallel or orthogonal to the test stimulus. We found orientation-selective adaptation to illusory contours in early (V1 and V2) and higher-tier visual areas (V3, hV4, VO1, V3A/B, V7, LO1, LO2). That is, fMRI responses were smaller for test stimuli parallel to the adapter than for test stimuli orthogonal to the adapter. In two control experiments using spatially jittered and phase-randomized stimuli, we demonstrated that this adaptation was not just in response to differences in the distribution of spectral power in the stimuli. Orientation-selective adaptation to illusory contours increased from early to higher-tier visual areas. Thus, both early and higher-tier visual areas contain neurons selective for the orientation of this type of illusory contour.
PMCID: PMC2728022  PMID: 17329415
illusory contours; adaptation; orientation selectivity; fMRI; visual cortex
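A common way to quantify the orientation-selective adaptation described in entry 7 is a normalized index comparing responses to parallel and orthogonal test stimuli. The sketch below computes such an index; the response values are illustrative placeholders, not data from the study.

```python
# Minimal sketch of an adaptation index; response values are placeholders
# (e.g., percent signal change), not data from the study.
def adaptation_index(r_parallel, r_orthogonal):
    """Positive values mean weaker responses to the adapted (parallel) orientation."""
    return (r_orthogonal - r_parallel) / (r_orthogonal + r_parallel)

responses = {            # area: (parallel test, orthogonal test)
    "V1":  (0.42, 0.48),
    "LO1": (0.30, 0.52),
}
for area, (r_par, r_orth) in responses.items():
    print(f"{area}: adaptation index = {adaptation_index(r_par, r_orth):.2f}")
```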
8.  Crowding Changes Appearance 
Current Biology  2010;20(6):496-501.
Crowding is the breakdown in object recognition that occurs in cluttered visual environments [1–4] and the fundamental limit on peripheral vision, affecting identification within many visual modalities [5–9] and across large spatial regions [10]. Though crowding is frequently characterized as a disruptive process through which object representations are suppressed [11, 12] or lost altogether [13–15], we demonstrate that it systematically changes the appearance of objects. In particular, target patches of visual noise that are surrounded (“crowded”) by oriented Gabor flankers become perceptually oriented, matching the flankers. This was established with a change-detection paradigm: under crowded conditions, target changes from noise to Gabor went unnoticed when the Gabor orientation matched the flankers (and the illusory target percept), despite being easily detected when the orientations differed. Rotation of the flankers (leaving the target noise unaltered) also induced illusory target rotations. Blank targets led to similar results, demonstrating that crowding can induce apparent structure where none exists. Finally, adaptation to these stimuli induced a tilt aftereffect at the target location, consistent with signals from the flankers “spreading” across space. These results confirm predictions from change-based models of crowding, such as averaging [16], and establish crowding as a regularization process that simplifies the peripheral field by promoting consistent appearance among adjacent objects.
Graphical Abstract
► Patches of visual noise become perceptually oriented when crowded by Gabor elements
► Changes from crowded-noise targets to Gabors go unnoticed when perceptually matched
► Adaptation to crowded change gives tilt aftereffects in the target location
► Crowding promotes consistency by regularizing the peripheral visual field
PMCID: PMC2849014  PMID: 20206527
9.  “I Look in Your Eyes, Honey”: Internal Face Features Induce Spatial Frequency Preference for Human Face Processing 
PLoS Computational Biology  2009;5(3):e1000329.
Numerous psychophysical experiments have found that humans preferentially rely on a narrow band of spatial frequencies for recognition of face identity. A recently conducted theoretical study by the author suggests that this frequency preference reflects an adaptation of the brain's face processing machinery to this specific stimulus class (i.e., faces). The purpose of the present study is to examine this property in greater detail and specifically to elucidate the implication of internal face features (i.e., eyes, mouth, and nose). To this end, I parameterized Gabor filters to match the spatial receptive fields of contrast-sensitive neurons in the primary visual cortex (simple and complex cells). Filter responses to a large number of face images were computed, aligned for internal face features, and response-equalized (“whitened”). The results demonstrate that the frequency preference is caused by internal face features. Thus, the psychophysically observed human frequency bias for face processing seems to be specifically caused by the intrinsic spatial frequency content of internal face features.
Author Summary
Imagine a photograph showing your friend's face. Although you might think that every single detail in his face matters for recognizing him, numerous experiments have shown that the brain prefers a rather coarse resolution instead. This means that a small rectangular photograph of about 30 to 40 pixels in width (showing only the face from left ear to right ear) is optimal. But why? To answer this question, I analyzed a large number of male and female face images. (The analysis was designed to mimic the way that the brain presumably processes them.) The analysis was carried out separately for each of the internal face features (left eye, right eye, mouth, and nose), which makes it possible to identify the feature(s) responsible for setting the resolution level; it turns out that the eyes and the mouth set it. Thus, looking at the eyes and mouth at the mentioned coarse resolution gives the most reliable signals for face recognition, and the brain has built-in knowledge of that. Although a preferred resolution level for face recognition has been observed in numerous experiments over a long period, this study offers, for the first time, a plausible explanation.
PMCID: PMC2653192  PMID: 19325870
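The analysis in entry 9 rests on Gabor filters configured as model simple and complex cells. A minimal sketch of a quadrature-pair ("energy") Gabor analysis of a feature patch is given below; the patch itself, the frequency range, and the envelope width are assumptions for illustration, not the parameters used in the paper.

```python
# Minimal sketch of a Gabor "energy" (complex-cell) analysis of a feature patch;
# the patch, frequency range, and envelope width are illustrative assumptions.
import numpy as np

def gabor(size, freq, theta, sigma, phase):
    """Gabor patch: sinusoid at `freq` cycles/image under a circular Gaussian envelope."""
    half = size // 2
    y, x = np.mgrid[-half:half, -half:half]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.cos(2 * np.pi * freq * xr / size + phase) * np.exp(-(x**2 + y**2) / (2 * sigma**2))

def energy(patch, freq, theta, sigma):
    """Quadrature-pair energy: squared responses of 0- and 90-degree-phase filters."""
    even = np.sum(patch * gabor(patch.shape[0], freq, theta, sigma, 0.0))
    odd = np.sum(patch * gabor(patch.shape[0], freq, theta, sigma, np.pi / 2))
    return even**2 + odd**2

rng = np.random.default_rng(1)
eye_patch = rng.normal(size=(64, 64))       # placeholder for an aligned eye region

freqs = np.arange(2, 33, 2)                 # cycles per image width (assumed range)
thetas = np.linspace(0, np.pi, 4, endpoint=False)
spectrum = [np.mean([energy(eye_patch, f, th, sigma=8.0) for th in thetas]) for f in freqs]
print("preferred frequency (cycles/image):", int(freqs[int(np.argmax(spectrum))]))
```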
10.  A dynamic causal modeling analysis of the effective connectivities underlying top-down letter processing 
Neuropsychologia  2011;49(5):10.1016/j.neuropsychologia.2011.01.011.
The present study employed Dynamic Causal Modeling to investigate the effective functional connectivity between regions of the neural network involved in top-down letter processing. We used an illusory letter detection paradigm in which participants detected letters while viewing pure noise images. When participants detected letters, the response of the right middle occipital gyrus (MOG) in the visual cortex was enhanced by increased feedback connectivity from the left inferior frontal gyrus (IFG). In addition, illusory letter detection increased feed-forward connectivity from the right MOG to the left inferior parietal lobule. Originating in the left IFG, this top-down letter processing network may facilitate the detection of letters by activating letter processing areas within the visual cortex. This activation in turn may highlight the visual features of letters and send letter information to activate the associated phonological representations in the identified parietal region.
PMCID: PMC3817006  PMID: 21237182
letter processing; word processing; top-down processing; fMRI; dynamic causal modeling
11.  Anomalous Motion Illusion Contributes to Visual Preference 
This study investigated the relationship between the magnitude of illusory motion in variants of the “Rotating Snakes” pattern and the visual preference among such patterns. In Experiment 1, we manipulated the outer contour and the internal geometrical structure of the figure to test for corresponding modulations in the perceived illusion magnitude. The strength of illusory motion was estimated by the method of adjustment, in which the speed of a standard moving figure was matched to the speed of the perceived illusory motion in test figures. We observed modulation of the perceived strength of illusory motion congruent with our geometrical manipulations. In Experiment 2, we directly compared the magnitude of the perceived illusory motion and the preference for these patterns using the method of paired comparison. Images differing in illusion magnitude showed corresponding differences in the reported preference for these patterns. In addition, further analysis revealed that the geometry and lower-level image characteristics also substantially contributed to the observed preference ratings. Together these results support the idea that the presence of an illusory effect, together with geometrical characteristics, determines affective preference for images, as such images may be regarded as more interesting, surprising, or fascinating.
PMCID: PMC3509719  PMID: 23226138
motion illusion; esthetic preference; illusion magnitude; geometry of patterns
12.  Music Alters Visual Perception 
PLoS ONE  2011;6(4):e18861.
Visual perception is not a passive process: in order to efficiently process visual input, the brain actively uses previous knowledge (e.g., memory) and expectations about what the world should look like. However, perception is not only influenced by previous knowledge. In particular, the perception of emotional stimuli is influenced by the emotional state of the observer. In other words, how we perceive the world depends not only on what we know of the world but also on how we feel. In this study, we further investigated the relation between mood and perception.
Methods and Findings
Observers performed a difficult stimulus-detection task in which they had to detect schematic happy and sad faces embedded in noise. Mood was manipulated by means of music. We found that observers were more accurate in detecting faces congruent with their mood, corroborating earlier research. However, in trials in which no actual face was presented, observers made a significant number of false alarms. The content of these false alarms, or illusory percepts, was strongly influenced by the observers' mood.
As illusory percepts are believed to reflect the content of internal representations that are employed by the brain during top-down processing of visual input, we conclude that top-down modulation of visual processing is not purely predictive in nature: mood, in this case manipulated by music, may also directly alter the way we perceive the world.
PMCID: PMC3080883  PMID: 21533041
13.  Own-race and own-age biases facilitate visual awareness of faces under interocular suppression 
The detection of a face in a visual scene is the first stage in the face processing hierarchy. Although all subsequent, more elaborate face processing depends on the initial detection of a face, surprisingly little is known about the perceptual mechanisms underlying face detection. Recent evidence suggests that relatively hard-wired face detection mechanisms are broadly tuned to all face-like visual patterns as long as they respect the typical spatial configuration of the eyes above the mouth. Here, we qualify this notion by showing that face detection mechanisms are also sensitive to face shape and facial surface reflectance properties. We used continuous flash suppression (CFS) to render faces invisible at the beginning of a trial and measured the time upright and inverted faces needed to break into awareness. Young Caucasian adult observers were presented with faces from their own race or from another race (race experiment) and with faces from their own age group or from another age group (age experiment). Faces matching the observers’ own race and age group were detected more quickly. Moreover, the advantage of upright over inverted faces in overcoming CFS, i.e., the face inversion effect (FIE), was larger for own-race and own-age faces. These results demonstrate that differences in face shape and surface reflectance influence access to awareness and configural face processing at the initial detection stage. Although we did not collect data from observers of another race or age group, these findings are a first indication that face detection mechanisms are shaped by visual experience with faces from one’s own social group. Such experience-based fine-tuning of face detection mechanisms may equip in-group faces with a competitive advantage for access to conscious awareness.
PMCID: PMC4118029  PMID: 25136308
face perception; face detection; visual awareness; race; age; interocular suppression; continuous flash suppression
14.  A distributed neural system for top-down face processing 
Neuroscience letters  2008;451(1):6-10.
Evidence suggests that the neural system associated with face processing is a distributed cortical network containing both bottom-up and top-down mechanisms. While bottom-up face processing has been the focus of many studies, the neural areas involved in top-down face processing have not been extensively investigated, owing to the difficulty of isolating top-down influences from the bottom-up response engendered by presentation of a face. In the present study, we used a novel experimental method to induce illusory face detection. This method allowed us to directly examine the neural systems involved in top-down face processing while minimizing the influence of bottom-up perceptual input. A distributed cortical network of top-down face processing was identified by analyzing the functional connectivity patterns of the right fusiform face area (FFA). This distributed cortical network model for face processing includes both “core” and “extended” face processing areas. It also includes the left anterior cingulate cortex (ACC), bilateral orbitofrontal cortex (OFC), left dorsolateral prefrontal cortex (DLPFC), left premotor cortex, and left inferior parietal cortex. These findings suggest that top-down face processing involves not only regions for analyzing the visual appearance of faces, but also those involved in processing low spatial frequency (LSF) information, decision making, and working memory.
PMCID: PMC2634849  PMID: 19121364
top-down processing; psychophysiological interaction (PPI); distributed cortical network; fMRI; face processing
15.  Illusions of Visual Motion Elicited by Electrical Stimulation of Human MT Complex 
PLoS ONE  2011;6(7):e21798.
Human cortical area MT+ (hMT+) is known to respond to visual motion stimuli, but its causal role in the conscious experience of motion remains largely unexplored. Studies in non-human primates demonstrate that altering activity in area MT can influence motion perception judgments, but animal studies are inherently limited in assessing subjective conscious experience. In the current study, we use functional magnetic resonance imaging (fMRI), intracranial electrocorticography (ECoG), and electrical brain stimulation (EBS) in three patients implanted with intracranial electrodes to address the role of area hMT+ in conscious visual motion perception. We show that in conscious human subjects, reproducible illusory motion can be elicited by electrical stimulation of hMT+. These visual motion percepts occurred only when the site of stimulation overlapped directly with the region of the brain that had shown increased fMRI and electrophysiological activity during moving compared to static visual stimuli in the same individual subjects. Electrical stimulation in neighboring regions failed to produce illusory motion. Our study provides evidence for a sufficient causal link between the hMT+ network and the human conscious experience of visual motion. It also suggests a clear spatial relationship between fMRI signal and ECoG activity in the human brain.
PMCID: PMC3135604  PMID: 21765915
16.  A Motion Illusion Reveals Mechanisms of Perceptual Stabilization 
PLoS ONE  2008;3(7):e2741.
Visual illusions are valuable tools for the scientific examination of the mechanisms underlying perception. In the peripheral drift illusion, special drift patterns appear to move although they are static. During fixation, small involuntary eye movements generate retinal image slips, which need to be suppressed for stable perception. Here we show that the peripheral drift illusion reveals the mechanisms of perceptual stabilization associated with these micromovements. In a series of experiments we found that illusory motion was only observed in the peripheral visual field. The strength of illusory motion varied with the degree of micromovements. However, drift patterns presented in the central (but not the peripheral) visual field modulated the strength of illusory peripheral motion. Moreover, although central drift patterns were not perceived as moving, they elicited illusory motion of neutral peripheral patterns. Central drift patterns modulated illusory peripheral motion even when micromovements remained constant. Interestingly, perceptual stabilization was only affected by static drift patterns, not by real motion signals. Our findings suggest that perceptual instabilities caused by fixational eye movements are corrected by a mechanism that relies on visual rather than extraretinal (proprioceptive or motor) signals, and that drift patterns systematically bias this compensatory mechanism. These mechanisms can be revealed by static visual patterns that give rise to the peripheral drift illusion, but remain undetected with other patterns. Accordingly, the peripheral drift illusion is of unique value for examining processes of perceptual stabilization.
PMCID: PMC2453321  PMID: 18648651
17.  Internal representations for face detection – an application of noise-based image classification to BOLD responses 
Human Brain Mapping  2012;34(11):3101-3115.
What basic visual structures underlie human face detection, and how can we extract such structures directly from the amplitude of neural responses elicited by face processing? Here, we address these issues by investigating an extension of noise-based image classification to BOLD responses recorded in high-level visual areas. First, we assess the applicability of this classification method to such data and, second, we explore its results in connection with the neural processing of faces. To this end, we construct luminance templates from white noise fields based on the responses of face-selective areas in the human ventral cortex. Using behaviorally and neurally derived classification images, our results reveal a family of simple but robust image structures subserving face representation and detection. Thus, we confirm the role played by classical face-selective regions in face detection and help clarify the representational basis of this perceptual function. From a theoretical standpoint, our findings support the idea of simple but highly diagnostic neurally coded features for face detection. At the same time, from a methodological perspective, our work demonstrates the ability of noise-based image classification, in conjunction with fMRI, to help uncover the structure of high-level perceptual representations.
PMCID: PMC4204487  PMID: 22711230
face recognition; reverse correlation; fMRI
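One way to picture the noise-based image classification of BOLD responses in entry 17 is as response-weighted reverse correlation: each white-noise field is weighted by the mean-centered amplitude of a face-selective region, and the weighted fields are averaged into a template. The sketch below implements that generic scheme with simulated data; the shapes and the weighting are illustrative assumptions, not the paper's exact pipeline.

```python
# Minimal sketch of response-weighted reverse correlation with simulated data;
# shapes and weighting are assumptions, not the paper's exact pipeline.
import numpy as np

rng = np.random.default_rng(2)
n_trials, h, w = 500, 48, 48

noise_fields = rng.normal(size=(n_trials, h, w))   # per-trial white-noise stimuli
bold = rng.normal(size=n_trials)                   # per-trial FFA response amplitudes

# Template: average of noise fields weighted by the mean-centered BOLD amplitude.
weights = bold - bold.mean()
template = np.tensordot(weights, noise_fields, axes=(0, 0)) / n_trials
print("template shape:", template.shape,
      "| peak |value|:", round(float(np.abs(template).max()), 3))
```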
18.  Primary Visual Cortex Activity along the Apparent-Motion Trace Reflects Illusory Perception 
PLoS Biology  2005;3(8):e265.
The illusion of apparent motion can be induced when visual stimuli are successively presented at different locations. It has been shown in previous studies that motion-sensitive regions in extrastriate cortex are relevant for the processing of apparent motion, but it is unclear whether primary visual cortex (V1) is also involved in the representation of the illusory motion path. We investigated, in human subjects, apparent-motion-related activity in patches of V1 representing locations along the path of illusory stimulus motion using functional magnetic resonance imaging. Here we show that apparent motion caused a blood-oxygenation-level-dependent response along the V1 representations of the apparent-motion path, including regions that were not directly activated by the apparent-motion-inducing stimuli. This response was unaltered when participants had to perform an attention-demanding task that diverted their attention away from the stimulus. With a bistable motion quartet, we confirmed that the activity was related to the conscious perception of movement. Our data suggest that V1 is part of the network that represents the illusory path of apparent motion. The activation in V1 can be explained either by lateral interactions within V1 or by feedback mechanisms from higher visual areas, especially the motion-sensitive human MT/V5 complex.
Using fMRI in humans, the authors reveal a clear role for V1 cortex in forming an illusory perception of motion when stationary stimuli are successively flashed in different locations.
PMCID: PMC1175820  PMID: 16018720
19.  Diagnostic Features of Emotional Expressions Are Processed Preferentially 
PLoS ONE  2012;7(7):e41792.
Diagnostic features of emotional expressions are differentially distributed across the face. The current study examined whether these diagnostic features are preferentially attended to even when they are irrelevant for the task at hand or when faces appear at different locations in the visual field. To this end, fearful, happy and neutral faces were presented to healthy individuals in two experiments while eye movements were measured. In Experiment 1, participants had to accomplish an emotion classification, a gender discrimination or a passive viewing task. To differentiate fast, potentially reflexive eye movements from a more elaborate scanning of faces, stimuli were presented for either 150 or 2000 ms. In Experiment 2, similar faces were presented at different spatial positions to rule out the possibility that eye movements merely reflect a general bias for certain visual field locations. In both experiments, participants fixated the eye region much longer than any other region in the face. Furthermore, the eye region was attended to more strongly when fearful or neutral faces were shown, whereas more attention was directed toward the mouth of happy facial expressions. Since these results were similar across the other experimental manipulations, they indicate that diagnostic features of emotional expressions are preferentially processed irrespective of task demands and spatial locations. Saliency analyses revealed that a computational model of bottom-up visual attention could not explain these results. Furthermore, as these gaze preferences were evident very early after stimulus onset and occurred even when saccades did not allow for extracting further information from these stimuli, they may reflect a preattentive mechanism that automatically detects relevant facial features in the visual field and facilitates the orientation of attention towards them. This mechanism might crucially depend on amygdala functioning, and it is potentially impaired in a number of clinical conditions such as autism or social anxiety disorders.
PMCID: PMC3405011  PMID: 22848607
20.  Angle Alignment Evokes Perceived Depth and Illusory Surfaces 
Perception  2008;37(10):1471-1487.
There is a distinct visual process that triggers the perception of illusory surfaces and contours along the intersections of aligned, zigzag line patterns. Such illusory contours and surfaces are qualitatively different from illusory contours of the Kanizsa type. The illusory contours and surfaces in this case are not the product of occlusion and do not imply occlusion of one surface by another. Rather, the aligned angles in the patterns are combined by the visual system into the perception of a fold or a 3-D corner, such as the steps of a staircase or a wall meeting a floor. The depth impression is ambiguous and reversible, like the Necker cube. Such patterns were used by American Indian artists of the Akimel O’odham (Pima) tribe in basketry, and also by modern European and American artists such as Josef Albers, Bridget Riley, Victor Vasarely, and Frank Stella. Our research aimed to find out which manipulations of the visual image affect perceived depth in such patterns, in order to learn about the underlying perceptual mechanisms. Using paired comparisons, we found that human observers perceive depth in such patterns if and only if lines in adjacent regions of the patterns join to form angles, and if and only if the angles are aligned precisely so as to be consistent with a fold or 3-D corner. The amount of perceived depth is graded, depending on the steepness and the density of angles in the aligned-angle pattern. The required precision of the alignment implies that early retinotopic visual cortical areas may be involved in this perceptual behavior, but the linkage of form with perceived depth suggests the involvement of higher cortical areas as well.
PMCID: PMC3063121  PMID: 19065852
21.  The Fusiform Face Area Is Engaged in Holistic, Not Parts-Based, Representation of Faces 
PLoS ONE  2012;7(7):e40390.
Numerous studies with functional magnetic resonance imaging have shown that the fusiform face area (FFA) in the human brain plays a key role in face perception. Recent studies have found that both the featural information of faces (e.g., eyes, nose, and mouth) and the configural information of faces (i.e., spatial relation among features) are encoded in the FFA. However, little is known about whether the featural information is encoded independent of or combined with the configural information in the FFA. Here we used multi-voxel pattern analysis to examine holistic representation of faces in the FFA by correlating spatial patterns of activation with behavioral performance in discriminating face parts with face configurations either present or absent. Behaviorally, the absence of face configurations (versus presence) impaired discrimination of face parts, suggesting a holistic representation in the brain. Neurally, spatial patterns of activation in the FFA were more similar among correct than incorrect trials only when face parts were presented in a veridical face configuration. In contrast, spatial patterns of activation in the occipital face area, as well as the object-selective lateral occipital complex, were more similar among correct than incorrect trials regardless of the presence of veridical face configurations. This finding suggests that in the FFA faces are represented not on the basis of individual parts but in terms of the whole that emerges from the parts.
PMCID: PMC3391267  PMID: 22792301
22.  A Dynamic Neural Field Model of Mesoscopic Cortical Activity Captured with Voltage-Sensitive Dye Imaging 
PLoS Computational Biology  2010;6(9):e1000919.
A neural field model is presented that captures the essential non-linear characteristics of activity dynamics across several millimeters of visual cortex in response to local flashed and moving stimuli. We account for physiological data obtained by voltage-sensitive dye (VSD) imaging, which reports mesoscopic population activity at high spatio-temporal resolution. Stimulation included a single flashed square, a single flashed bar, the line-motion paradigm (for which psychophysical studies have shown that briefly flashing a square before a bar produces a sensation of illusory motion within the bar), and moving-square controls. We consider a two-layer neural field (NF) model describing an excitatory and an inhibitory layer of neurons as a coupled system of non-linear integro-differential equations. Under the assumption that the aggregated activity of both layers is reflected by VSD imaging, our phenomenological model quantitatively accounts for the observed spatio-temporal activity patterns. Moreover, the model generalizes to novel similar stimuli, as it matches activity evoked by moving squares of different speeds. Our results indicate that feedback from higher brain areas is not required to produce motion patterns in the case of the illusory line-motion paradigm. Physiological interpretation of the model suggests that a considerable fraction of the VSD signal may be due to inhibitory activity, supporting the notion that balanced intra-layer cortical interactions between inhibitory and excitatory populations play a major role in shaping dynamic stimulus representations in the early visual cortex.
Author Summary
Understanding the functioning of the primary visual cortex requires characterization of the non-linear dynamics that underlie visual perception and of how the cortical architecture gives rise to these dynamics. Recent advances in real-time voltage-sensitive dye (VSD) imaging permit recording of cortical population activity with high spatial and temporal resolution. This wealth of data can be related to cortical function, dynamics, and architecture by computational modeling. Here we used a mesoscopic neural field model to describe brain dynamics at the population level as measured by VSD imaging. Introduced in 1972 by Wilson and Cowan, these models are derived from statistical mechanics to analyze the collective properties of large numbers of neurons. For simplicity, the cortical planar tissue is assumed to contain only two types of homogeneously distributed neurons (excitatory and inhibitory) that interact via recurrent lateral connections. This study shows 1) how a concise neural field model can simulate VSD data quantitatively in space and time by identifying the underlying non-linear dynamics, 2) how such a model can support hypotheses about visual information processing, and 3) how the model can be linked to the neuronal architecture.
PMCID: PMC2936513  PMID: 20838578
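Entry 22 builds on a two-layer Wilson-Cowan neural field. The sketch below simulates a minimal one-dimensional version with an excitatory and an inhibitory layer, Gaussian lateral connectivity, and a brief "flashed square" input; all gains, widths, and time constants are arbitrary toy values rather than the fitted parameters of the published model.

```python
# Minimal 1-D two-layer (excitatory/inhibitory) Wilson-Cowan-type neural field
# driven by a brief "flashed square"; all parameters are arbitrary toy values.
import numpy as np

n, dx, dt, steps = 200, 0.05, 0.001, 600            # space step (mm), time step (s)
x = np.arange(n) * dx

def gaussian_kernel(sigma):
    k = np.exp(-0.5 * ((np.arange(n) - n // 2) * dx / sigma) ** 2)
    return k / k.sum()

def lateral(field, kernel):
    """Circular convolution implementing lateral connections."""
    return np.real(np.fft.ifft(np.fft.fft(field) * np.fft.fft(np.fft.ifftshift(kernel))))

sigmoid = lambda u: 1.0 / (1.0 + np.exp(-3.0 * (u - 2.0)))
K_e, K_i = gaussian_kernel(0.3), gaussian_kernel(0.6)   # inhibition spreads more broadly
tau_e, tau_i = 0.010, 0.020

E, I = np.zeros(n), np.zeros(n)
stim = 5.0 * ((x > 4.0) & (x < 5.0))                    # 1-mm "flashed square"

for step in range(steps):
    drive = stim if step * dt < 0.05 else 0.0           # 50-ms flash
    Le, Li = lateral(E, K_e), lateral(I, K_i)
    dE = (-E + sigmoid(16.0 * Le - 12.0 * Li + drive)) / tau_e
    dI = (-I + sigmoid(15.0 * Le - 3.0 * Li)) / tau_i
    E, I = E + dt * dE, I + dt * dI                     # forward-Euler update

print("peak excitatory activity:", round(float(E.max()), 3),
      "at x =", round(float(x[E.argmax()]), 2), "mm")
```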
23.  Spatial Attention Facilitates Selection of Illusory Objects: Evidence from Event-Related Brain Potentials 
Brain Research  2006;1139:143-152.
The relationship between spatial attention and object-based attention has long been debated. On the basis of behavioral evidence it has been hypothesized that these two forms of attention share a common mechanism, such that directing spatial attention to one part of an object facilitates the selection of the entire object. In a previous study (Martinez et al., 2006) we used recordings of event-related potentials (ERPs) during a paradigm modeled after that of Egly et al. (1994) to investigate this relationship. As reported in numerous studies of spatial attention, we found the typical pattern of enhanced neural activity in visual cortex elicited by attended stimuli. Unattended stimuli belonging to the same object as the attended stimuli elicited a very similar spatiotemporal pattern of enhanced neural activity that was localized to lateral occipital cortex (LOC). This similarity was taken as evidence that spatial- and object-selective attention share, at least in part, a common neural mechanism. In the present study we further investigate this relationship by examining whether this spread of spatial attention within attended objects can be guided by objects defined by illusory contours. Subjects viewed a display consisting of two illusory rectangular objects and directed attention to continuous sequences of stimuli (brief onsets) at one end of one of the objects. Stimuli occurring at irrelevant locations but belonging to the attended object elicited larger posterior N1 amplitudes than stimuli forming part of a different, unattended object. This object-selective N1 enhancement was localized to lateral occipital cortex. The present data support the hypothesis that the allocation of spatial attention can be guided by illusory object boundaries and that this allocation strengthens the perceptual representations of attended objects at the level of visual area LOC.
PMCID: PMC1892231  PMID: 17288996
ERPs; spatial attention; object attention; illusory objects
24.  Sensory Competition in the Face Processing Areas of the Human Brain 
PLoS ONE  2011;6(9):e24450.
The concurrent presentation of multiple stimuli in the visual field may trigger mutually suppressive interactions throughout the ventral visual stream. While several studies have examined sensory competition effects among non-face stimuli, relatively little is known about the interactions in the human brain for multiple face stimuli. In the present study we analyzed the neuronal basis of sensory competition in an event-related functional magnetic resonance imaging (fMRI) study using multiple face stimuli. We varied the ratio of faces and phase-noise images within a composite display with a constant number of peripheral stimuli, thereby manipulating the competitive interactions between faces. For contralaterally presented stimuli we observed strong competition effects in the fusiform face area (FFA) bilaterally and in the right lateral occipital area (LOC), but not in the occipital face area (OFA), suggesting their different roles in sensory competition. When we increased the spatial distance among pairs of faces, the magnitude of suppressive interactions was reduced in the FFA. Surprisingly, the magnitude of competition depended on the visual hemifield of the stimuli: ipsilateral stimulation reduced the competition effects somewhat in the right LOC while it increased them in the left LOC. This suggests a left-hemifield dominance of sensory competition. Our results support the sensory competition theory in the processing of multiple faces and suggest that sensory competition occurs in several cortical areas in both cerebral hemispheres.
PMCID: PMC3166313  PMID: 21912694
25.  Stimulus Requirements for Face Perception: An Analysis Based on “Totem Poles” 
The stimulus requirements for perceiving a face are not well defined but are presumably simple, for vivid faces can often be seen in random or natural images such as cloud or rock formations. To characterize these requirements, we measured where observers reported the impression of faces in images defined by symmetric 1/f noise. This allowed us to examine the prominence and properties of different features and their necessary configurations. In these stimuli many faces can be perceived along the vertical midline, and they appear stacked at multiple scales, reminiscent of “totem poles.” In addition to symmetry, the faces in noise are invariably upright and thus reveal the inversion effects that are thought to be a defining property of configural face processing. To a large extent, seeing a face required seeing eyes, and these were largely restricted to dark regions in the images. Other features were more subordinate and showed relatively little bias in polarity. Moreover, the prominence of eyes depended primarily on their luminance contrast and showed little influence of chromatic contrast. Notably, most faces were rated as clearly defined with highly distinctive attributes, suggesting that once an image area is coded as a face it is perceptually completed in a manner consistent with this interpretation. This suggests that the requisite trigger features are sufficient to holistically “capture” the surrounding noise structure to form the facial representation. Yet despite these well-articulated percepts, we show in further experiments that while a pair of dark spots added to noise images appears face-like, these impressions fail to elicit other signatures of face processing; in particular, they fail to elicit an N170 or fixation patterns typical of images of actual faces. These results suggest that very simple stimulus configurations are sufficient to invoke many aspects of holistic and configural face perception while nevertheless failing to fully engage the neural machinery of face coding, implying that different signatures of face processing may have different stimulus requirements.
PMCID: PMC3569666  PMID: 23407599
face perception; face detection; configural coding; facial features; symmetry; inversion effects; noise
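The "totem pole" stimuli in entry 25 are images of symmetric 1/f noise. The sketch below generates one such image by filtering white noise with a 1/f amplitude falloff and mirroring it about the vertical midline; the image size and normalization are assumptions.

```python
# Minimal sketch: bilaterally symmetric 1/f noise; size and normalization are assumptions.
import numpy as np

def one_over_f_noise(size, rng):
    """White noise filtered so that Fourier amplitude falls off as 1/f."""
    fy = np.fft.fftfreq(size)[:, None]
    fx = np.fft.fftfreq(size)[None, :]
    f = np.sqrt(fx**2 + fy**2)
    f[0, 0] = 1.0                                   # avoid division by zero at DC
    spectrum = np.fft.fft2(rng.normal(size=(size, size))) / f
    img = np.real(np.fft.ifft2(spectrum))
    return (img - img.mean()) / img.std()           # zero mean, unit variance

rng = np.random.default_rng(3)
noise = one_over_f_noise(256, rng)
stimulus = 0.5 * (noise + np.fliplr(noise))         # mirror about the vertical midline
print(stimulus.shape, round(float(stimulus.std()), 3))
```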
