A moving visual field can induce the feeling of self-motion or vection. Illusory motion from static repeated asymmetric patterns creates a compelling visual motion stimulus, but it is unclear if such illusory motion can induce a feeling of self-motion or alter self-motion perception. In these experiments, human subjects reported the perceived direction of self-motion for sway translation and yaw rotation at the end of a period of viewing preset visual stimuli coordinated with varying inertial stimuli. This tested the hypothesis that illusory visual motion would influence self-motion perception in the horizontal plane. Trials were arranged into 5 blocks based on stimulus type: moving star field with yaw rotation, moving star field with sway translation, illusory motion with yaw, illusory motion with sway, and static arrows with sway. Static arrows were used to evaluate the effect of cognitive suggestion on self-motion perception. Each trial had a control condition; the illusory motion controls were altered versions of the experimental image in which the illusory motion effect was removed. For the moving visual stimulus, controls were carried out in a dark room. With the arrow visual stimulus, controls were a gray screen. In blocks containing a visual stimulus there was an 8 s viewing interval, with the inertial stimulus occurring over the final 1 s. This allowed the perception of the visual illusion to be measured objectively. When no visual stimulus was present, only the 1 s motion stimulus was presented. Eight women and five men (mean age 37) participated. To assess shifts in self-motion perception, the effect of each visual stimulus on the inertial stimulus velocity (cm/s) at which subjects were equally likely to report motion in either direction was measured. Significant effects were seen for moving star fields for both translation (p = 0.001) and rotation (p < 0.001) and for arrows (p = 0.02). 
For the visual motion stimuli, inertial motion perception was shifted in the direction consistent with the visual stimulus. Arrows had a small effect on self-motion perception, driven by a minority of subjects. There was no significant effect of illusory motion on self-motion perception for either translation or rotation (p > 0.1 for both). Thus, although a true moving visual field can induce self-motion, the results of this study show that illusory motion does not.
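The threshold analysis described above amounts to estimating a point of subjective equality (PSE): the inertial velocity at which either direction is reported equally often. A minimal sketch of such a fit, assuming a cumulative Gaussian psychometric function and using invented response data (not the study's), might look like this:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def psychometric(v, pse, sigma):
    """Probability of a 'rightward' report at inertial velocity v (cm/s)."""
    return norm.cdf(v, loc=pse, scale=sigma)

# Illustrative data: stimulus velocities and proportion of 'rightward' reports.
velocities = np.array([-4.0, -2.0, -1.0, 0.0, 1.0, 2.0, 4.0])
p_right    = np.array([0.05, 0.15, 0.30, 0.55, 0.75, 0.90, 0.98])

(pse, sigma), _ = curve_fit(psychometric, velocities, p_right, p0=(0.0, 1.0))
# A shift of the fitted PSE under a visual stimulus, relative to its control
# condition, would indicate a bias in self-motion perception.
print(f"PSE = {pse:.2f} cm/s, sigma = {sigma:.2f} cm/s")
```

Comparing the fitted PSE between each visual condition and its control is what yields the per-stimulus effects (in cm/s) reported in the abstract.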
Face pareidolia is the illusory perception of non-existent faces. The present study, for the first time, contrasted behavioral and neural responses of face pareidolia with those of letter pareidolia to explore face-specific behavioral and neural responses during illusory face processing. Participants were shown pure-noise images but were led to believe that 50% of them contained either faces or letters; they reported seeing faces or letters illusorily 34% and 38% of the time, respectively. The right fusiform face area (rFFA) showed a specific response when participants “saw” faces as opposed to letters in the pure-noise images. Behavioral responses during face pareidolia produced a classification image that resembled a face, whereas those during letter pareidolia produced a classification image that was letter-like. Further, the extent to which such behavioral classification images resembled faces was directly related to the level of face-specific activations in the right FFA. This finding suggests that the right FFA plays a specific role not only in processing of real faces but also in illusory face perception, perhaps serving to facilitate the interaction between bottom-up information from the primary visual cortex and top-down signals from the prefrontal cortex (PFC). Whole brain analyses revealed a network specialized in face pareidolia, including both the frontal and occipito-temporal regions. Our findings suggest that human face processing has a strong top-down component whereby sensory input with even the slightest suggestion of a face can result in the interpretation of a face.
face processing; fMRI; fusiform face area; top-down processing; face pareidolia
A classification image (CI) technique has shown that static luminance noise near visually completed contours affects the discrimination of fat and thin Kanizsa shapes. These influential noise regions were proposed to reveal “behavioral receptive fields” of completed contours–the same regions to which early cortical cells respond in neurophysiological studies of contour completion. Here, we hypothesized that 1) influential noise regions correspond to the surfaces that distinguish fat and thin shapes (hereafter, key regions); and 2) key region noise biases a “fat” response to the extent that its contrast polarity (lighter or darker than background) matches the shape's filled-in surface color.
To test our hypothesis, we had observers discriminate fat and thin noise-embedded rectangles that were defined by either illusory or luminance-defined contours (Experiment 1). Surrounding elements (“inducers”) caused the shapes to appear either lighter or darker than the background–a process sometimes referred to as lightness induction. For both illusory and luminance-defined rectangles, key region noise biased a fat response to the extent that its contrast polarity (light or dark) matched the induced surface color. When lightness induction was minimized, luminance noise had no consistent influence on shape discrimination. This pattern arose when pixels immediately adjacent to the discriminated boundaries were excluded from the analysis (Experiment 2) and also when the noise was restricted to the key regions so that the noise never overlapped with the physically visible edges (Experiment 3). The lightness effects did not occur in the absence of enclosing boundaries (Experiment 4).
Under noisy conditions, lightness induction alters visually completed shape. Moreover, behavioral receptive fields derived in CI studies do not correspond to contours per se but to filled-in surface regions contained by those contours. The relevance of lightness to two-dimensional shape completion supplies a new constraint for models of object perception.
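The classification image technique underlying these experiments can be illustrated with a toy simulation: the CI is simply the mean of the noise fields on one response minus the mean on the other, so pixel regions that systematically bias a response stand out. Everything below (image size, the simulated template-matching observer) is an invented assumption, not the authors' pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, size = 2000, 32

# Hypothetical "template" the simulated observer correlates with the noise:
# here, pixels in the top half bias a "fat" response.
template = np.zeros((size, size))
template[: size // 2, :] = 1.0

noises = rng.normal(size=(n_trials, size, size))
# Simulated observer: responds "fat" when the noise correlates with the template.
responses = np.tensordot(noises, template, axes=2) > 0

# Classification image: mean noise on "fat" trials minus mean on "thin" trials.
ci = noises[responses].mean(axis=0) - noises[~responses].mean(axis=0)

# Influential regions (the "behavioral receptive field") emerge where the
# CI correlates with the observer's internal template.
corr = np.corrcoef(ci.ravel(), template.ravel())[0, 1]
print(f"CI-template correlation: {corr:.2f}")
```

In the actual experiments, restricting or excluding noise regions (as in Experiments 2 and 3) tests which parts of this influential region drive the response.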
The human visual system seems to be particularly efficient at detecting faces. This efficiency sometimes comes at the cost of wrongly seeing faces in arbitrary patterns, including famous examples such as a rock formation on Mars or the burn patterns on a slice of toast. In machine vision, face detection has made considerable progress and has become a standard feature of many digital cameras. The arguably most widespread algorithm for such applications (the “Viola-Jones” algorithm) achieves high detection rates at high computational efficiency. To what extent do the patterns that the algorithm mistakenly classifies as faces also fool humans? We selected three kinds of stimuli from real-life, first-person-perspective movies based on the algorithm's output: correct detections (“real faces”), false positives (“illusory faces”), and correctly rejected locations (“non-faces”). Observers were shown pairs of these for 20 ms and had to direct their gaze to the location of the face. We found that illusory faces were mistaken for faces more frequently than non-faces. In addition, rotation of the real face yielded more errors, while rotation of the illusory face yielded fewer errors. Using colored stimuli increased overall performance but did not change the pattern of results. When the eye movement was replaced by a manual response, however, the preference for illusory faces over non-faces disappeared. Taken together, our data show that humans make face-detection errors similar to those of the Viola-Jones algorithm when directing their gaze to briefly presented stimuli. In particular, the relative spatial arrangement of oriented filters seems to be of relevance. This suggests that efficient face detection in humans is likely pre-attentive and based on rather simple features, such as those encoded in the early visual system.
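The Viola-Jones algorithm discussed above is built on Haar-like contrast features evaluated in constant time via an integral image (plus a boosted cascade of such features, omitted here). A minimal sketch of one such feature, evaluated on an invented toy image, might look like this:

```python
import numpy as np

def integral_image(img):
    """Padded cumulative sum so any rectangle sum costs four lookups."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
    ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
    return ii

def rect_sum(ii, r, c, h, w):
    """Sum of img[r:r+h, c:c+w] computed from the integral image."""
    return ii[r + h, c + w] - ii[r, c + w] - ii[r + h, c] + ii[r, c]

def two_rect_feature(ii, r, c, h, w):
    """Haar-like feature: lower half minus upper half of an h x w window.
    Positive when the top (e.g., an eye region) is darker than the bottom."""
    top = rect_sum(ii, r, c, h // 2, w)
    bottom = rect_sum(ii, r + h // 2, c, h // 2, w)
    return bottom - top

# Toy image: a dark horizontal band (an "eye region") over a brighter area.
img = np.full((8, 8), 200.0)
img[2:4, :] = 50.0  # dark band
ii = integral_image(img)
feat = two_rect_feature(ii, 2, 0, 4, 8)
print(feat)  # positive: face-like contrast arrangement
```

The full detector combines thousands of such features, selected by AdaBoost, into a cascade that rejects non-face windows early; any window whose coarse light-dark arrangement resembles a face (including the illusory faces used here) can pass.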
Looking at our face in a mirror is one of the strongest phenomenological experiences of the Self, in which we need to identify the face reflected in the mirror as belonging to us. Recent behavioral and neuroimaging studies reported that self-face identification relies not only upon the visual-mnemonic representation of one’s own face but also upon continuous updating and integration of visuo-tactile signals. Therefore, bodily self-consciousness plays a major role in self-face identification, with respect to the interplay between unisensory and multisensory processing. However, while previous studies demonstrated that the integration of multisensory body-related signals contributes to the visual processing of one’s own face, there are so far no data regarding how self-face identification, inversely, contributes to bodily self-consciousness. In the present study, we tested whether self–other face identification impacts either the egocentered or the heterocentered visuo-spatial mechanisms that are core processes of bodily self-consciousness and sustain self–other distinction. To this end, we developed a new paradigm, named “Double Mirror.” This paradigm, consisting of a semi-transparent double mirror and computer-controlled light-emitting diodes, elicits an illusory self–other face-merging effect in ecologically more valid conditions, i.e., when participants are physically facing each other and interacting. Self-face identification was manipulated by exposing pairs of participants to an Interpersonal Visual Stimulation in which the reflections of their faces merged in the mirror. Participants simultaneously performed visuo-spatial and mental own-body transformation tasks centered on their own face (egocentered) or the face of their partner (heterocentered) in the pre- and post-stimulation phases. We show that self–other face identification altered the egocentered visuo-spatial mechanisms. Heterocentered coding was preserved.
Our data suggest that changes in self-face identification induced a bottom-up conflict between the current visual representation and the stored mnemonic representation of one’s own face which, in turn, top-down impacted bodily self-consciousness.
self-face identification; bodily self-consciousness; visuo-spatial mechanisms; self–other distinction
To study top-down face processing, the present study used an experimental paradigm in which participants detected non-existent faces in pure noise images. Conventional BOLD signal analysis identified three regions involved in this illusory face detection. These regions included the left orbitofrontal cortex (OFC) in addition to the right fusiform face area (FFA) and right occipital face area (OFA), both of which were previously known to be involved in both top-down and bottom-up processing of faces. We used Dynamic Causal Modeling (DCM) and Bayesian model selection to further analyze the data, revealing both intrinsic and modulatory effective connectivity among these three cortical regions. Specifically, our results support the claim that the orbitofrontal cortex plays a crucial role in the top-down processing of faces by regulating the activities of the occipital face area; the occipital face area in turn detects the illusory face features in the visual stimuli and provides this information to the fusiform face area for further analysis.
Face processing; Top-down processing; Bottom-up processing; Dynamic Causal Modeling (DCM); Orbitofrontal cortex (OFC)
Human electrophysiological studies support a model whereby sensitivity to so-called illusory contour stimuli is first seen within the lateral occipital complex. A challenge to this model posits that the lateral occipital complex is a general site for crude region-based segmentation, based on findings of equivalent hemodynamic activations in the lateral occipital complex to illusory contour and so-called salient region stimuli, a stimulus class that lacks the classic bounding contours of illusory contours. Using high-density electrical mapping of visual evoked potentials, we show that early lateral occipital cortex activity is substantially stronger to illusory contour than to salient region stimuli, while later lateral occipital complex activity is stronger to salient region than to illusory contour stimuli. Our results suggest that equivalent hemodynamic activity to illusory contour and salient region stimuli likely reflects temporally integrated responses, a result of the poor temporal resolution of hemodynamic imaging. The temporal precision of visual evoked potentials is critical for establishing viable models of completion processes and visual scene analysis. We propose that crude spatial segmentation analyses, which are insensitive to illusory contours, occur first within dorsal visual regions, not lateral occipital complex, and that initial illusory contour sensitivity is a function of the lateral occipital complex.
object recognition; event-related potentials; ERP; filling-in
Observing a speaker’s mouth profoundly influences speech perception. For example, listeners perceive an “illusory” “ta” when the video of a face producing /ka/ is dubbed onto an audio /pa/. Here, we show how cortical areas supporting speech production mediate this illusory percept and audiovisual (AV) speech perception more generally. Specifically, cortical activity during AV speech perception occurs in many of the same areas that are active during speech production. We find that different perceptions of the same syllable and the perception of different syllables are associated with different distributions of activity in frontal motor areas involved in speech production. Activity patterns in these frontal motor areas resulting from the illusory “ta” percept are more similar to the activity patterns evoked by AV/ta/ than they are to patterns evoked by AV/pa/ or AV/ka/. In contrast to the activity in frontal motor areas, stimulus-evoked activity for the illusory “ta” in auditory and somatosensory areas and visual areas initially resembles activity evoked by AV/pa/ and AV/ka/, respectively. Ultimately, though, activity in these regions comes to resemble activity evoked by AV/ta/. Together, these results suggest that AV speech elicits in the listener a motor plan for the production of the phoneme that the speaker might have been attempting to produce, and that feedback in the form of efference copy from the motor system ultimately influences the phonetic interpretation.
audiovisual speech perception; efference copy; McGurk effect; mirror system; motor system; prediction
Humans can perceive illusory or subjective contours in the absence of any real physical boundaries. We used an adaptation protocol to look for orientation-selective neural responses to illusory contours defined by phase-shifted abutting line gratings in the human visual cortex. We measured fMRI responses to illusory-contour test stimuli after adapting to an illusory-contour adapter stimulus that was oriented parallel or orthogonal to the test stimulus. We found orientation-selective adaptation to illusory contours in early (V1 and V2) and higher-tier visual areas (V3, hV4, VO1, V3A/B, V7, LO1, LO2). That is, fMRI responses were smaller for test stimuli parallel to the adapter than for test stimuli orthogonal to the adapter. In two control experiments using spatially jittered and phase-randomized stimuli, we demonstrated that this adaptation was not just in response to differences in the distribution of spectral power in the stimuli. Orientation-selective adaptation to illusory contours increased from early to higher-tier visual areas. Thus, both early and higher-tier visual areas contain neurons selective for the orientation of this type of illusory contour.
illusory contours; adaptation; orientation selectivity; fMRI; visual cortex
Crowding is the breakdown in object recognition that occurs in cluttered visual environments [1–4] and the fundamental limit on peripheral vision, affecting identification within many visual modalities [5–9] and across large spatial regions. Though frequently characterized as a disruptive process through which object representations are suppressed [11, 12] or lost altogether [13–15], we demonstrate that crowding systematically changes the appearance of objects. In particular, target patches of visual noise that are surrounded (“crowded”) by oriented Gabor flankers become perceptually oriented, matching the flankers. This was established with a change-detection paradigm: under crowded conditions, target changes from noise to Gabor went unnoticed when the Gabor orientation matched the flankers (and the illusory target percept), despite being easily detected when they differed. Rotation of the flankers (leaving target noise unaltered) also induced illusory target rotations. Blank targets led to similar results, demonstrating that crowding can induce apparent structure where none exists. Finally, adaptation to these stimuli induced a tilt aftereffect at the target location, consistent with signals from the flankers “spreading” across space. These results confirm predictions from change-based models of crowding, such as averaging, and establish crowding as a regularization process that simplifies the peripheral field by promoting consistent appearance among adjacent objects.
► Patches of visual noise become perceptually oriented when crowded by Gabor elements ► Changes from crowded-noise targets to Gabors go unnoticed when perceptually matched ► Adaptation to crowded change gives tilt aftereffects in the target location ► Crowding promotes consistency by regularizing the peripheral visual field
This fMRI study investigated top-down letter processing with an illusory letter detection task. Participants responded whether one of a number of different possible letters was present in a very noisy image. After initial training that became increasingly difficult, they continued to detect letters even though the images consisted of pure noise, which eliminated contamination from strong bottom-up input. For illusory letter detection, greater fMRI activation was observed in several cortical regions. These regions included the precuneus, an area generally involved in top-down processing of objects, and the left superior parietal lobule, an area previously identified with the processing of valid letter and word stimuli. In addition, top-down letter detection also activated the left inferior frontal gyrus, an area that may be involved in the integration of general top-down processing and letter-specific bottom-up processing. These findings suggest that these regions may play a significant role in top-down as well as bottom-up processing of letters and words, and are likely to have reciprocal functional connections to more posterior regions in the word and letter processing network.
word processing; letter processing; top-down processing; fMRI
The present study employed Dynamic Causal Modeling to investigate the effective functional connectivity between regions of the neural network involved in top-down letter processing. We used an illusory letter detection paradigm in which participants detected letters while viewing pure noise images. When participants detected letters, the response of the right middle occipital gyrus (MOG) in the visual cortex was enhanced by increased feed-backward connectivity from the left inferior frontal gyrus (IFG). In addition, illusory letter detection increased feed-forward connectivity from the right MOG to the left inferior parietal lobule. Originating in the left IFG, this top-down letter processing network may facilitate the detection of letters by activating letter processing areas within the visual cortex. This activation in turn may highlight the visual features of letters and send letter information to activate the associated phonological representations in the identified parietal region.
letter processing; word processing; top-down processing; fMRI; dynamic causal modeling
This study investigated the relationship between the magnitude of illusory motion in variants of the “Rotating Snakes” pattern and the visual preference among such patterns. In Experiment 1, we manipulated the outer contour and the internal geometrical structure of the figure to test for corresponding modulations in the perceived illusion magnitude. The strength of illusory motion was estimated by the method of adjustment, in which the speed of a standard moving figure was matched to the speed of the perceived illusory motion in test figures. We observed modulation of the perceived strength of illusory motion congruent with our geometrical manipulations. In Experiment 2, we directly compared the magnitude of the perceived illusory motion and the preference for these patterns by the method of paired comparison. Images differing in illusion magnitude showed corresponding differences in the reported preference for these patterns. In addition, further analysis revealed that the geometry and lower-level image characteristics also substantially contributed to the observed preference ratings. Together, these results support the idea that the presence of an illusory effect and geometrical characteristics determine affective preference for images, as such images may be regarded as more interesting, surprising, or fascinating.
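The method of paired comparison used in Experiment 2 can be sketched with a toy tally: every pair of images is judged, and each image's preference score is the proportion of comparisons it won. The choice data below are invented for illustration, not the study's:

```python
import numpy as np

# Illustrative choice data: (chosen_image, rejected_image) index pairs from
# hypothetical pairwise preference judgments among 4 pattern variants.
choices = [(0, 1), (0, 2), (0, 3), (1, 2), (3, 1), (2, 3), (0, 2), (1, 3)]

n_images = 4
wins = np.zeros(n_images)
appearances = np.zeros(n_images)
for chosen, rejected in choices:
    wins[chosen] += 1
    appearances[chosen] += 1
    appearances[rejected] += 1

# Preference score: proportion of comparisons each image won.
scores = wins / appearances
print(scores)  # [1.   0.5  0.25 0.25]
```

Correlating such preference scores with independently measured illusion magnitudes (and with geometric image properties) is the kind of analysis that supports the conclusions above; more elaborate scaling models (e.g., Thurstone or Bradley-Terry) refine the same idea.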
motion illusion; esthetic preference; illusion magnitude; geometry of patterns
Schizophrenia is a devastating psychiatric disorder characterized by symptoms including delusions, hallucinations, and disorganized thought. Kanizsa shape perception is a basic visual process that builds illusory contour and shape representations from spatially segregated edges. Recent studies have shown that schizophrenia patients exhibit abnormal electrophysiological signatures during Kanizsa shape perception tasks, but it remains unclear how these abnormalities are manifested behaviorally and whether they arise from early or late levels in visual processing.
To address this issue, we had healthy controls and schizophrenia patients discriminate quartets of sectored circles that either formed or did not form illusory shapes (illusory and fragmented conditions, respectively). Half of the trials in each condition incorporated distractor lines, which are known to disrupt illusory contour formation and thereby worsen illusory shape discrimination.
Relative to their respective fragmented conditions, patients performed worse than controls in the illusory discrimination. Conceptually disorganized patients—characterized by their incoherent manner of speaking—were primarily driving the effect. Regardless of patient status or disorganization levels, distractor lines worsened discrimination more in the illusory than the fragmented condition, indicating that all groups could form illusory contours.
People with schizophrenia form illusory contours but are less able to utilize those contours to discern global shape. The impairment is especially related to the ability to think and speak coherently. These results suggest that Kanizsa shape perception incorporates an early illusory contour formation stage and a later, conceptually-mediated shape integration stage, with the latter being primarily compromised in schizophrenia.
illusory contours; Kanizsa shape perception; visual completion; schizophrenia; conceptual disorganization; thought disorder
Human cortical area MT+ (hMT+) is known to respond to visual motion stimuli, but its causal role in the conscious experience of motion remains largely unexplored. Studies in non-human primates demonstrate that altering activity in area MT can influence motion perception judgments, but animal studies are inherently limited in assessing subjective conscious experience. In the current study, we use functional magnetic resonance imaging (fMRI), intracranial electrocorticography (ECoG), and electrical brain stimulation (EBS) in three patients implanted with intracranial electrodes to address the role of area hMT+ in conscious visual motion perception. We show that in conscious human subjects, reproducible illusory motion can be elicited by electrical stimulation of hMT+. These visual motion percepts only occurred when the site of stimulation overlapped directly with the region of the brain that had increased fMRI and electrophysiological activity during moving compared to static visual stimuli in the same individual subjects. Electrical stimulation in neighboring regions failed to produce illusory motion. Our study provides evidence for a sufficient causal link between the hMT+ network and the human conscious experience of visual motion. It also suggests a clear spatial relationship between the fMRI signal and ECoG activity in the human brain.
Evidence suggests that the neural system associated with face processing is a distributed cortical network containing both bottom-up and top-down mechanisms. While bottom-up face processing has been the focus of many studies, the neural areas involved in top-down face processing have not been extensively investigated due to the difficulty of isolating top-down influences from the bottom-up response engendered by presentation of a face. In the present study, we used a novel experimental method to induce illusory face detection. This method allowed direct examination of the neural systems involved in top-down face processing while minimizing the influence of bottom-up perceptual input. A distributed cortical network of top-down face processing was identified by analyzing the functional connectivity patterns of the right fusiform face area (FFA). This distributed cortical network model for face processing includes both “core” and “extended” face processing areas. It also includes the left anterior cingulate cortex (ACC), bilateral orbitofrontal cortex (OFC), left dorsolateral prefrontal cortex (DLPFC), left premotor cortex, and left inferior parietal cortex. These findings suggest that top-down face processing engages not only regions for analyzing the visual appearance of faces, but also those involved in processing low spatial frequency (LSF) information, decision making, and working memory.
top-down processing; psychophysiological interaction (PPI); distributed cortical network; fMRI; face processing
Visual illusions are valuable tools for the scientific examination of the mechanisms underlying perception. In the peripheral drift illusion, special drift patterns appear to move although they are static. During fixation, small involuntary eye movements generate retinal image slips, which need to be suppressed for stable perception. Here we show that the peripheral drift illusion reveals the mechanisms of perceptual stabilization associated with these micromovements. In a series of experiments we found that illusory motion was only observed in the peripheral visual field. The strength of illusory motion varied with the degree of micromovements. However, drift patterns presented in the central (but not the peripheral) visual field modulated the strength of illusory peripheral motion. Moreover, although central drift patterns were not perceived as moving, they elicited illusory motion of neutral peripheral patterns. Central drift patterns modulated illusory peripheral motion even when micromovements remained constant. Interestingly, perceptual stabilization was only affected by static drift patterns, but not by real motion signals. Our findings suggest that perceptual instabilities caused by fixational eye movements are corrected by a mechanism that relies on visual rather than extraretinal (proprioceptive or motor) signals, and that drift patterns systematically bias this compensatory mechanism. These mechanisms may be revealed by utilizing static visual patterns that give rise to the peripheral drift illusion, but remain undetected with other patterns. Accordingly, the peripheral drift illusion is of unique value for examining processes of perceptual stabilization.
The detection of a face in a visual scene is the first stage in the face processing hierarchy. Although all subsequent, more elaborate face processing depends on the initial detection of a face, surprisingly little is known about the perceptual mechanisms underlying face detection. Recent evidence suggests that relatively hard-wired face detection mechanisms are broadly tuned to all face-like visual patterns as long as they respect the typical spatial configuration of the eyes above the mouth. Here, we qualify this notion by showing that face detection mechanisms are also sensitive to face shape and facial surface reflectance properties. We used continuous flash suppression (CFS) to render faces invisible at the beginning of a trial and measured the time upright and inverted faces needed to break into awareness. Young Caucasian adult observers were presented with faces from their own race or from another race (race experiment) and with faces from their own age group or from another age group (age experiment). Faces matching the observers’ own race and age group were detected more quickly. Moreover, the advantage of upright over inverted faces in overcoming CFS, i.e., the face inversion effect (FIE), was larger for own-race and own-age faces. These results demonstrate that differences in face shape and surface reflectance influence access to awareness and configural face processing at the initial detection stage. Although we did not collect data from observers of another race or age group, these findings are a first indication that face detection mechanisms are shaped by visual experience with faces from one’s own social group. Such experience-based fine-tuning of face detection mechanisms may equip in-group faces with a competitive advantage for access to conscious awareness.
face perception; face detection; visual awareness; race; age; interocular suppression; continuous flash suppression
Nothing provides as strong a sense of self as seeing one's face. Nevertheless, it remains unknown how the brain processes the sense of self during the multisensory experience of looking at one's face in a mirror. Synchronized visuo-tactile stimulation on one's own and another's face, an experience that is akin to looking in the mirror but seeing another's face, causes the illusory experience of ownership over the other person's face and changes in self-recognition. Here, we investigate the neural correlates of this enfacement illusion using fMRI. We examine activity in the human brain as participants experience tactile stimulation delivered to their face, while observing either temporally synchronous or asynchronous tactile stimulation delivered to another's face on either a specularly congruent or incongruent location. Activity in the multisensory right temporo-parietal junction, intraparietal sulcus, and the unimodal inferior occipital gyrus showed an interaction between the synchronicity and the congruency of the stimulation and varied with the self-reported strength of the illusory experience, which was recorded after each stimulation block. Our results highlight the important interplay between unimodal and multimodal information processing for self-face recognition, and elucidate the neurobiological basis for the plasticity required for identifying with our continuously changing visual appearance.
enfacement; face recognition; fMRI; multisensory; self-recognition
The Shroud of Turin (hereafter the Shroud) is one of the most widely known and widely studied artifacts in existence, with enormous historical and religious significance. For years, the Shroud has inspired worldwide interest in images on its fabric that appear to be of the body and face of a man executed in a manner consistent with crucifixion, and many believe that these images were formed in the Shroud’s fibers during the Resurrection of Jesus of Nazareth. More recently, however, other reports have suggested that the Shroud also contains evidence of inscriptions, and these reports have been used to add crucial support to the view that the Shroud is the burial cloth of Jesus. Unfortunately, these reports of inscriptions are based on marks that are barely visible on the Shroud, even when images are enhanced, and the actual existence of writing on the Shroud is still a matter of considerable debate. Here we discuss previous evidence concerning the psychological processes involved generally in the perception of writing, especially when letters and words are indistinct. We then report two experiments in which the influence of religious context on the perception of inscriptions was addressed specifically, using an image of woven fabric (modern linen) containing no writing and with no religious provenance. This image was viewed in two different contexts: in the Religious Context, participants were informed that the image was of a linen artifact that was important to the Christian faith whereas, in the non-religious Neutral Context, participants were informed that the image was of a simple piece of linen. Both groups were told that the image might contain faint words and were asked to report any words they could see. All participants detected words on the image, indicated that these words were visible, and were able to trace the words they detected on the image.
In each experiment, more religious words were detected in the Religious Context condition than in the Neutral Context condition, whereas context had no effect on the number of non-religious words detected, indicating that religious context had a specific effect on the perception of illusory writing. Indeed, in the Neutral Context condition, no religious words at all were reported in either experiment. These findings suggest that images of woven material, like linen, inspire illusory perceptions of writing and that the nature of these perceptions is influenced considerably by the religious expectations of observers. As a consequence, the normal psychological processes underlying the perception of writing, and the tendency of these processes to produce illusory perceptions, should be an essential consideration when addressing the existence of inscriptions on religious artifacts such as the Shroud of Turin.
The illusion of apparent motion can be induced when visual stimuli are successively presented at different locations. It has been shown in previous studies that motion-sensitive regions in extrastriate cortex are relevant for the processing of apparent motion, but it is unclear whether primary visual cortex (V1) is also involved in the representation of the illusory motion path. We investigated, in human subjects, apparent-motion-related activity in patches of V1 representing locations along the path of illusory stimulus motion using functional magnetic resonance imaging. Here we show that apparent motion caused a blood-oxygenation-level-dependent response along the V1 representations of the apparent-motion path, including regions that were not directly activated by the apparent-motion-inducing stimuli. This response was unaltered when participants had to perform an attention-demanding task that diverted their attention away from the stimulus. With a bistable motion quartet, we confirmed that the activity was related to the conscious perception of movement. Our data suggest that V1 is part of the network that represents the illusory path of apparent motion. The activation in V1 can be explained either by lateral interactions within V1 or by feedback mechanisms from higher visual areas, especially the motion-sensitive human MT/V5 complex.
Using fMRI in humans, the authors reveal a clear role for V1 cortex in forming an illusory perception of motion when stationary stimuli are successively flashed in different locations.
In glaucoma, the density of retinal ganglion cells is reduced. It is largely unknown how this influences retinal information processing. An increase in spatial summation and a decrease in contrast gain control and contrast adaptation have been reported. A decrease in lateral inhibition might also arise. This could result in a larger-than-expected response to some stimuli, which could mask ganglion cell loss on functional testing (structure-function discrepancy). The aim of this study was to compare lateral inhibition between glaucoma patients and healthy subjects; we used a case-control design. Cases (n = 18) were selected to have advanced visual field loss in combination with a normal visual acuity. Controls (n = 50) were not allowed to have symptoms or signs of any eye disease. Lateral inhibition was measured psychophysically on a computer screen, with (1) a modified illusory movement experiment and (2) a contrast sensitivity (CS) test. Illusory movement was quantified by nulling it with a real movement; the measure of lateral inhibition was the amount of illusory movement. CS was measured at 1 and 4 cycles per degree (cpd); the measure of lateral inhibition was the difference between log CS at 4 and 1 cpd. Both measures were compared between cases and controls; analyses were adjusted for age and gender. There was no difference between cases and controls for these two measures of lateral inhibition (p = 0.58 for illusory movement; p = 0.20 for CS). The movement threshold was higher in cases than in controls (p = 0.008) and log CS was lower, at both 1 (-0.20; p = 0.008) and 4 (-0.28; p = 0.001) cpd. Our results indicate that spatially antagonistic mechanisms are not specifically affected in glaucoma, at least not in the intact center of a severely damaged visual field. This suggests that the structure-function discrepancy in glaucoma is not related to a decrease in lateral inhibition.
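The CS-based index described above reduces to a difference of log sensitivities at two spatial frequencies. A minimal sketch of that computation, where the function names and the example thresholds are hypothetical illustrations rather than values from the study:

```python
import math

def log_cs(threshold_contrast: float) -> float:
    """Log10 contrast sensitivity from a measured contrast threshold (0-1)."""
    return math.log10(1.0 / threshold_contrast)

def lateral_inhibition_index(thr_4cpd: float, thr_1cpd: float) -> float:
    """Difference between log CS at 4 and at 1 cycles per degree.

    Larger values indicate relatively better mid-frequency sensitivity,
    consistent with stronger lateral inhibition (center-surround antagonism).
    """
    return log_cs(thr_4cpd) - log_cs(thr_1cpd)

# Hypothetical thresholds: 1% contrast at 4 cpd, 2% at 1 cpd
index = lateral_inhibition_index(0.01, 0.02)
print(round(index, 3))  # log10(100) - log10(50) = 0.301
```

A group difference in this index, rather than in either log CS value alone, is what would implicate lateral inhibition specifically.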
Numerous psychophysical experiments have found that humans rely preferentially on a narrow band of spatial frequencies for recognition of face identity. A recent theoretical study by the author suggests that this frequency preference reflects an adaptation of the brain's face-processing machinery to this specific stimulus class (i.e., faces). The purpose of the present study is to examine this property in greater detail and specifically to elucidate the contribution of internal face features (i.e., eyes, mouth, and nose). To this end, I parameterized Gabor filters to match the spatial receptive fields of contrast-sensitive neurons in the primary visual cortex (simple and complex cells). Filter responses to a large number of face images were computed, aligned for internal face features, and response-equalized (“whitened”). The results demonstrate that the frequency preference is caused by internal face features. Thus, the psychophysically observed human frequency bias for face processing seems to be specifically caused by the intrinsic spatial frequency content of internal face features.
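The filtering step described above can be sketched as follows. The specific parameters here (frequencies in cycles per image, envelope width, quadrature energy as a complex-cell model, and the toy grating stimulus) are illustrative assumptions, not the study's actual settings:

```python
import numpy as np

def gabor_pair(size: int, cycles: float, sigma_frac: float = 0.25):
    """Quadrature pair of Gabor kernels, loosely modeling V1 simple cells.

    cycles: spatial frequency in cycles per image width.
    sigma_frac: Gaussian envelope width as a fraction of image width.
    Returns zero-mean even- and odd-phase kernels of shape (size+1, size+1).
    """
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1] / size
    env = np.exp(-(x**2 + y**2) / (2 * sigma_frac**2))
    even = env * np.cos(2 * np.pi * cycles * x)  # even-phase simple cell
    odd = env * np.sin(2 * np.pi * cycles * x)   # odd-phase simple cell
    return even - even.mean(), odd - odd.mean()

def band_energy(image: np.ndarray, cycles: float) -> float:
    """Phase-invariant (complex-cell-like) energy in one frequency band.

    Assumes an odd image width so the kernel grid matches the image.
    """
    img = image - image.mean()  # respond to contrast, not mean luminance
    even, odd = gabor_pair(img.shape[1] - 1, cycles)
    return float(np.hypot((img * even).sum(), (img * odd).sum()))

# Toy stimulus: a vertical grating at 8 cycles per image width
size = 65
xs = np.arange(size) / size
toy = np.tile(np.cos(2 * np.pi * 8 * xs), (size, 1))
freqs = [2, 4, 8, 16, 32]
resp = [band_energy(toy, f) for f in freqs]
print(freqs[int(np.argmax(resp))])  # peak response in the 8-cycle band
```

Applied to aligned face-feature patches instead of a toy grating, the band with the largest (whitened) response identifies the preferred spatial frequency.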
Imagine a photograph showing your friend's face. Although you might think that every single detail in his face matters for recognizing him, numerous experiments have shown that the brain prefers a rather coarse resolution instead. This means that a small rectangular photograph of about 30 to 40 pixels in width (showing only the face from left ear to right ear) is optimal. But why? To answer this question, I analyzed a large number of male and female face images. (The analysis was designed to mimic the way that the brain presumably processes them.) The analysis was carried out separately for each internal face feature (left eye, right eye, mouth, and nose), which made it possible to identify the feature(s) responsible for setting the resolution level; it turns out that the eyes and the mouth set it. Thus, looking at the eyes and mouth at this coarse resolution gives the most reliable signals for face recognition, and the brain has built-in knowledge of that fact. Although a preferred resolution level for face recognition has long been observed in numerous experiments, this study offers, for the first time, a plausible explanation.
What basic visual structures underlie human face detection and how can we extract such structures directly from the amplitude of neural responses elicited by face processing? Here, we address these issues by investigating an extension of noise-based image classification to BOLD responses recorded in high-level visual areas. First, we assess the applicability of this classification method to such data and, second, we explore its results in connection with the neural processing of faces. To this end, we construct luminance templates from white noise fields based on the response of face-selective areas in the human ventral cortex. Using behaviorally and neurally-derived classification images, our results reveal a family of simple but robust image structures subserving face representation and detection. Thus, we confirm the role played by classical face selective regions in face detection and we help clarify the representational basis of this perceptual function. From a theory standpoint, our findings support the idea of simple but highly diagnostic neurally-coded features for face detection. At the same time, from a methodological perspective, our work demonstrates the ability of noise-based image classification in conjunction with fMRI to help uncover the structure of high-level perceptual representations.
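The core of noise-based image classification is a response-weighted average of the noise stimuli: pixels that covary with strong responses emerge as the template. A minimal simulation under assumed conditions (the hidden template, trial count, and noise model below are invented for illustration; real applications substitute behavioral choices or BOLD amplitudes for the simulated responses):

```python
import numpy as np

rng = np.random.default_rng(0)

def classification_image(noise_fields: np.ndarray,
                         responses: np.ndarray) -> np.ndarray:
    """Reverse-correlation classification image.

    noise_fields: (n_trials, h, w) white-noise stimuli.
    responses: (n_trials,) per-trial responses (behavioral or BOLD).
    Returns the z-scored-response-weighted average of the noise fields.
    """
    z = (responses - responses.mean()) / responses.std()
    return np.tensordot(z, noise_fields, axes=(0, 0)) / len(z)

# Simulated observer whose response tracks a hidden luminance template
h = w = 16
template = np.zeros((h, w))
template[4:8, 6:10] = 1.0  # hypothetical diagnostic "feature" region
noise = rng.standard_normal((5000, h, w))
responses = (noise * template).sum(axis=(1, 2)) + rng.standard_normal(5000)

ci = classification_image(noise, responses)
inside = ci[4:8, 6:10].mean()
outside = ci[template == 0].mean()
print(inside > outside)  # True: the template region is recovered
```

With enough trials the weighted average converges on the (scaled) template, which is what lets the method expose the image structure driving a face-selective response.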
face recognition; reverse correlation; fMRI
There is a distinct visual process that triggers the perception of illusory surfaces and contours along the intersections of aligned, zigzag line patterns. Such illusory contours and surfaces are qualitatively different from illusory contours of the Kanizsa type. The illusory contours and surfaces in this case are not the product of occlusion and do not imply occlusion of one surface by another. Rather, the aligned angles in the patterns are combined by the visual system into the perception of a fold or a 3-d corner, as of stairs on a staircase or a wall ending on a floor. The depth impression is ambiguous and reversible, like the Necker cube. Such patterns were used by American Indian artists of the Akimel O’odham (Pima) tribe in basketry, and also by modern European and American artists like Josef Albers, Bridget Riley, Victor Vasarely, and Frank Stella. Our research aimed to find out what manipulations of the visual image affect perceived depth in such patterns, in order to learn about the underlying perceptual mechanisms. Using paired comparisons, we found that human observers perceive depth in such patterns if and only if lines in adjacent regions of the patterns joined to form angles, and also if and only if the angles were aligned precisely to be consistent with a fold or 3-d corner. The amount of perceived depth is graded, depending on the steepness and the density of angles in the aligned-angle pattern. The required precision of the alignment implies that early retinotopic visual cortical areas may be involved in this perceptual behavior, but the linkage of form with perceived depth suggests involvement of higher cortical areas as well.
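Paired-comparison depth judgments of the kind described above are commonly aggregated into a one-dimensional perceptual scale. A minimal sketch using Thurstone Case V scaling; the win counts are invented for illustration, and this is one standard analysis rather than necessarily the one used in the study:

```python
from statistics import NormalDist
import numpy as np

def depth_scale(wins) -> np.ndarray:
    """Thurstone Case V scale values from a paired-comparison win matrix.

    wins[i][j] = number of trials on which pattern i was judged deeper
    than pattern j. Returns one scale value per pattern (arbitrary units):
    the row mean of inverse-normal-transformed choice proportions.
    """
    wins = np.asarray(wins, dtype=float)
    n = wins + wins.T                      # trials per pair
    p = np.where(n > 0, wins / np.where(n == 0, 1, n), 0.5)
    p = np.clip(p, 0.01, 0.99)             # keep z-scores finite
    np.fill_diagonal(p, 0.5)
    z = np.vectorize(NormalDist().inv_cdf)(p)
    return z.mean(axis=1)

# Hypothetical counts: 3 patterns, 20 trials per pair; pattern 2
# (steepest, densest angles) is judged deeper most often.
wins = [[0, 12, 4],
        [8, 0, 2],
        [16, 18, 0]]
scale = depth_scale(wins)
print(int(np.argmax(scale)))  # → 2
```

The graded scale values are what support the conclusion that perceived depth varies continuously with the steepness and density of the aligned angles.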