This fMRI study investigated top-down letter processing with an illusory letter detection task. Participants reported whether one of several possible letters was present in a very noisy image. After initial training in which detection became increasingly difficult, they continued to detect letters even when the images consisted of pure noise, which eliminated contamination from strong bottom-up input. For illusory letter detection, greater fMRI activation was observed in several cortical regions. These regions included the precuneus, an area generally involved in top-down processing of objects, and the left superior parietal lobule, an area previously identified with the processing of valid letter and word stimuli. In addition, top-down letter detection activated the left inferior frontal gyrus, an area that may be involved in the integration of general top-down processing and letter-specific bottom-up processing. These findings suggest that these regions may play a significant role in top-down as well as bottom-up processing of letters and words, and are likely to have reciprocal functional connections to more posterior regions in the word and letter processing network.
word processing; letter processing; top-down processing; fMRI
Crowding is the breakdown in object recognition that occurs in cluttered visual environments [1–4] and the fundamental limit on peripheral vision, affecting identification within many visual modalities [5–9] and across large spatial regions. Though frequently characterized as a disruptive process through which object representations are suppressed [11, 12] or lost altogether [13–15], we demonstrate that crowding systematically changes the appearance of objects. In particular, target patches of visual noise that are surrounded (“crowded”) by oriented Gabor flankers become perceptually oriented, matching the flankers. This was established with a change-detection paradigm: under crowded conditions, target changes from noise to Gabor went unnoticed when the Gabor orientation matched the flankers (and the illusory target percept), despite being easily detected when they differed. Rotation of the flankers (leaving target noise unaltered) also induced illusory target rotations. Blank targets led to similar results, demonstrating that crowding can induce apparent structure where none exists. Finally, adaptation to these stimuli induced a tilt aftereffect at the target location, consistent with signals from the flankers “spreading” across space. These results confirm predictions from change-based models of crowding, such as averaging, and establish crowding as a regularization process that simplifies the peripheral field by promoting consistent appearance among adjacent objects.
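The averaging account referenced above can be illustrated with a minimal sketch: the perceived target orientation is modeled as a weighted circular mean of the target and flanker orientations (the weight `w_target` is a hypothetical free parameter, not a value from the study):

```python
import math

def perceived_orientation(target_deg, flanker_degs, w_target=0.4):
    """Weighted circular average of orientations (180-deg periodic),
    a toy version of the 'averaging' account of crowding."""
    # Double the angles so 0 and 180 deg map to the same point on the circle.
    angles = [target_deg] + list(flanker_degs)
    weights = [w_target] + [(1 - w_target) / len(flanker_degs)] * len(flanker_degs)
    x = sum(w * math.cos(math.radians(2 * a)) for w, a in zip(weights, angles))
    y = sum(w * math.sin(math.radians(2 * a)) for w, a in zip(weights, angles))
    return (math.degrees(math.atan2(y, x)) / 2) % 180
```

Under this sketch, an unoriented noise target (which contributes no orientation signal, i.e. `w_target = 0`) crowded by 45-deg flankers is simply perceived at 45 deg, matching the illusory percept described above.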
► Patches of visual noise become perceptually oriented when crowded by Gabor elements ► Changes from crowded-noise targets to Gabors go unnoticed when perceptually matched ► Adaptation to crowded change gives tilt aftereffects in the target location ► Crowding promotes consistency by regularizing the peripheral visual field
Visual illusions are valuable tools for the scientific examination of the mechanisms underlying perception. In the peripheral drift illusion, special drift patterns appear to move although they are static. During fixation, small involuntary eye movements generate retinal image slips, which need to be suppressed for stable perception. Here we show that the peripheral drift illusion reveals the mechanisms of perceptual stabilization associated with these micromovements. In a series of experiments we found that illusory motion was only observed in the peripheral visual field. The strength of illusory motion varied with the degree of micromovements. However, drift patterns presented in the central (but not the peripheral) visual field modulated the strength of illusory peripheral motion. Moreover, although central drift patterns were not perceived as moving, they elicited illusory motion of neutral peripheral patterns. Central drift patterns modulated illusory peripheral motion even when micromovements remained constant. Interestingly, perceptual stabilization was only affected by static drift patterns, but not by real motion signals. Our findings suggest that perceptual instabilities caused by fixational eye movements are corrected by a mechanism that relies on visual rather than extraretinal (proprioceptive or motor) signals, and that drift patterns systematically bias this compensatory mechanism. These mechanisms may be revealed by utilizing static visual patterns that give rise to the peripheral drift illusion, but remain undetected with other patterns. Accordingly, the peripheral drift illusion is of unique value for examining processes of perceptual stabilization.
Observing a speaker’s mouth profoundly influences speech perception. For example, listeners perceive an “illusory” “ta” when the video of a face producing /ka/ is dubbed onto an audio /pa/. Here, we show how cortical areas supporting speech production mediate this illusory percept and audiovisual (AV) speech perception more generally. Specifically, cortical activity during AV speech perception occurs in many of the same areas that are active during speech production. We find that different perceptions of the same syllable and the perception of different syllables are associated with different distributions of activity in frontal motor areas involved in speech production. Activity patterns in these frontal motor areas resulting from the illusory “ta” percept are more similar to the activity patterns evoked by AV/ta/ than they are to patterns evoked by AV/pa/ or AV/ka/. In contrast to the activity in frontal motor areas, stimulus-evoked activity for the illusory “ta” in auditory and somatosensory areas and visual areas initially resembles activity evoked by AV/pa/ and AV/ka/, respectively. Ultimately, though, activity in these regions comes to resemble activity evoked by AV/ta/. Together, these results suggest that AV speech elicits in the listener a motor plan for the production of the phoneme that the speaker might have been attempting to produce, and that feedback in the form of efference copy from the motor system ultimately influences the phonetic interpretation.
audiovisual speech perception; efference copy; McGurk effect; mirror system; motor system; prediction
Combination of visual and kinesthetic information is essential to perceive bodily movements. We conducted behavioral and functional magnetic resonance imaging experiments to investigate the neuronal correlates of visuokinesthetic combination in perception of hand movement. Participants experienced illusory flexion movement of their hand elicited by tendon vibration while they viewed video-recorded flexion (congruent: CONG) or extension (incongruent: INCONG) motions of their hand. The amount of illusory experience was graded by the visual velocities only when visual information regarding hand motion was concordant with kinesthetic information (CONG). The left posterolateral cerebellum was specifically recruited under the CONG, and this left cerebellar activation was consistent for both left and right hands. The left cerebellar activity reflected the participants' intensity of illusory hand movement under the CONG, and we further showed that coupling of activity between the left cerebellum and the “right” parietal cortex emerges during this visuokinesthetic combination/perception. The “left” cerebellum, working with the anatomically connected high-order bodily region of the “right” parietal cortex, participates in online combination of exteroceptive (vision) and interoceptive (kinesthesia) information to perceive hand movement. The cerebro–cerebellar interaction may underlie updating of one's “body image,” when perceiving bodily movement from visual and kinesthetic information.
cerebro–cerebellar interaction; functional magnetic resonance imaging (fMRI); kinesthesia; limb movement; multisensory; tendon vibration
The human visual system seems to be particularly efficient at detecting faces. This efficiency sometimes comes at the cost of wrongfully seeing faces in arbitrary patterns, including famous examples such as a rock configuration on Mars or the browning patterns on toast. In machine vision, face detection has made considerable progress and has become a standard feature of many digital cameras. The arguably most widespread algorithm for such applications (the “Viola-Jones” algorithm) achieves high detection rates at high computational efficiency. To what extent do the patterns that the algorithm mistakenly classifies as faces also fool humans? We selected three kinds of stimuli from real-life, first-person perspective movies based on the algorithm's output: correct detections (“real faces”), false positives (“illusory faces”) and correctly rejected locations (“non faces”). Observers were shown pairs of these for 20 ms and had to direct their gaze to the location of the face. We found that illusory faces were mistaken for faces more frequently than non faces. In addition, rotation of the real face yielded more errors, while rotation of the illusory face yielded fewer errors. Using colored stimuli increased overall performance, but did not change the pattern of results. When replacing the eye movement with a manual response, however, the preference for illusory faces over non faces disappeared. Taken together, our data show that humans make similar face-detection errors as the Viola-Jones algorithm when directing their gaze to briefly presented stimuli. In particular, the relative spatial arrangement of oriented filters seems of relevance. This suggests that efficient face detection in humans is likely to be pre-attentive and based on rather simple features such as those encoded in the early visual system.
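The Viola-Jones algorithm named above rests on evaluating simple rectangular contrast features in constant time via an integral image (summed-area table). A minimal sketch of that core trick follows; the full detector additionally uses a boosted cascade of thousands of such features, which is omitted here:

```python
def integral_image(img):
    """Summed-area table: sat[r][c] = sum of img[0..r][0..c]."""
    h, w = len(img), len(img[0])
    sat = [[0] * w for _ in range(h)]
    for r in range(h):
        row_sum = 0
        for c in range(w):
            row_sum += img[r][c]
            sat[r][c] = row_sum + (sat[r - 1][c] if r else 0)
    return sat

def rect_sum(sat, top, left, bottom, right):
    """Sum of img over rows top..bottom, cols left..right in O(1)."""
    total = sat[bottom][right]
    if top:
        total -= sat[top - 1][right]
    if left:
        total -= sat[bottom][left - 1]
    if top and left:
        total += sat[top - 1][left - 1]
    return total

def two_rect_feature(sat, top, left, h, w):
    """Haar-like feature: upper half minus lower half of a window,
    e.g. a dark eye region over a brighter cheek region in a face."""
    upper = rect_sum(sat, top, left, top + h // 2 - 1, left + w - 1)
    lower = rect_sum(sat, top + h // 2, left, top + h - 1, left + w - 1)
    return upper - lower
```

Because each rectangle sum needs only four table lookups, every candidate window can be scored at negligible cost, which is the source of the computational efficiency mentioned in the abstract.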
This study investigated the relationship between the magnitude of illusory motion in variants of the “Rotating Snakes” pattern and the visual preference among such patterns. In Experiment 1 we manipulated the outer contour and the internal geometrical structure of the figure to test for corresponding modulations in the perceived illusion magnitude. The strength of illusory motion was estimated by the method of adjustment, in which the speed of a standard moving figure was matched to the speed of the perceived illusory motion in test figures. We observed modulation of the perceived strength of illusory motion congruent with our geometrical manipulations. In Experiment 2, we directly compared the magnitude of the perceived illusory motion and the preference for these patterns by a method of paired comparison. Images differing in illusion magnitude showed corresponding differences in the reported preference for these patterns. In addition, further analysis revealed that the geometry and lower-level image characteristics also substantially contributed to the observed preference ratings. Together these results support the idea that the presence of an illusory effect and the geometrical characteristics of a pattern determine affective preference for images, as such images may be regarded as more interesting, surprising, or fascinating.
motion illusion; esthetic preference; illusion magnitude; geometry of patterns
The present study investigated whether emotionally expressive faces guide attention and modulate fMRI activity in fusiform gyrus in acquired prosopagnosia. Patient PS, a pure case of acquired prosopagnosia with intact right middle fusiform gyrus, performed two behavioral experiments and a functional imaging experiment to address these questions. In a visual search task involving face stimuli, PS was faster to select the target face when it was expressing fear or happiness as compared to when it was emotionally neutral. In a change detection task, PS detected significantly more changes when the changed face was fearful as compared to when it was neutral. Finally, an fMRI experiment showed enhanced activation to emotionally expressive faces and bodies in right fusiform gyrus. In addition, PS showed normal body-selective activation in right fusiform gyrus, partially overlapping the fusiform face area. Together these behavioral and neuroimaging results show that attention was preferentially allocated to emotional faces in patient PS, as observed in healthy subjects. We conclude that systems involved in the emotional guidance of attention by facial expression can function normally in acquired prosopagnosia, and can thus be dissociated from systems involved in face identification.
prosopagnosia; emotion; face processing; FFA; FBA; attentional capture
The brain uses context and prior knowledge to repair degraded sensory inputs and improve perception. For example, listeners hear speech continuing uninterrupted through brief noises, even if the speech signal is artificially removed from the noisy epochs. In a functional MRI study, we show that this temporal filling-in process is based on two dissociable neural mechanisms: the subjective experience of illusory continuity, and the sensory repair mechanisms that support it. Areas mediating illusory continuity include the left posterior angular gyrus (AG) and superior temporal sulcus (STS) and the right STS. Unconscious sensory repair occurs in Broca’s area, bilateral anterior insula, and pre-supplementary motor area. The left AG/STS and all the repair regions show evidence for word-level template matching and communicate more when fewer acoustic cues are available. These results support a two-path process where the brain creates coherent perceptual objects by applying prior knowledge and filling-in corrupted sensory information.
Auditory induction; Continuity illusion; fMRI; Perceptual filling-in; Phonemic restoration; Speech
The processing of Kanizsa figures has classically been studied by flashing the full “pacmen” inducers at stimulus onset. A recent study, however, has shown that it is advantageous to present illusory figures in the “notch” mode of presentation, that is, by leaving the round inducers on screen at all times and removing the inward-oriented notches delineating the illusory figure at stimulus onset. Indeed, using the notch mode of presentation, novel P1 and N1 effects have been found when comparing visual evoked potentials (VEPs) to an illusory figure with VEPs to a control figure whose onset corresponds to the removal of outward-oriented notches, which prevents their integration into one delineated form. In Experiment 1, we replicated these findings: the illusory figure evoked a larger P1 and a smaller N1 than its control. In Experiment 2, real grey squares were placed over the notches so that one condition, that with inward-oriented notches, showed a large central grey square and the other condition, that with outward-oriented notches, showed four unconnected smaller grey squares. In response to these “real” figures, no P1 effect was found, but an N1 effect comparable to the one obtained with illusory figures was observed. Taken together, these results suggest that the P1 effect observed with illusory figures is likely specific to the processing of the illusory features of the figures. Conversely, the fact that the N1 effect was also obtained with real figures indicates that this effect may be due to more global processes related to depth segmentation or surface/object perception.
We used functional magnetic resonance imaging (fMRI) to study neural correlates of a robust somatosensory illusion that can dissociate tactile perception from physical stimulation. Repeated rapid stimulation at the wrist, then near the elbow, can create the illusion of touches at intervening locations along the arm, as if a rabbit hopped along it. We examined brain activity in humans using fMRI, with improved spatial resolution, during this version of the classic cutaneous rabbit illusion. As compared with control stimulation at the same skin sites (but in a different order that did not induce the illusion), illusory sequences activated contralateral primary somatosensory cortex, at a somatotopic location corresponding to the filled-in illusory perception on the forearm. Moreover, the amplitude of this somatosensory activation was comparable to that for veridical stimulation including the intervening position on the arm. The illusion additionally activated areas of premotor and prefrontal cortex. These results provide direct evidence that illusory somatosensory percepts can affect primary somatosensory cortex in a manner that corresponds somatotopically to the illusory percept.
High resolution fMRI was used to demonstrate activity in primary somatosensory cortex in response to the illusion of a sensory stimulus.
Many animals rely on visual motion detection for survival. Motion information is extracted from spatiotemporal intensity patterns on the retina, a paradigmatic neural computation. A phenomenological model, the Hassenstein-Reichardt Correlator (HRC), relates visual inputs to neural and behavioral responses to motion, but the circuits that implement this computation remain unknown. Using cell-type specific genetic silencing, minimal motion stimuli, and in vivo calcium imaging, we examine two critical HRC inputs. These two pathways respond preferentially to light and dark moving edges. We demonstrate that these pathways perform overlapping but complementary subsets of the computations underlying the HRC. A numerical model implementing differential weighting of these operations displays the observed edge preferences. Intriguingly, these pathways are distinguished by their sensitivities to a stimulus correlation that corresponds to an illusory percept, “reverse phi”, that affects many species. Thus, this computational architecture may be widely used to achieve edge selectivity in motion detection.
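The HRC named above multiplies the delayed signal from one point in space with the undelayed signal from a neighboring point, then subtracts the mirror-symmetric product, so the sign of the output indicates motion direction. A minimal discrete-time sketch (illustrative one-sample delay and toy stimulus, not the fly-circuit parameters):

```python
def hrc_response(left, right, delay=1):
    """Hassenstein-Reichardt correlator over two input time series.
    Sums left[t-delay]*right[t] - right[t-delay]*left[t]; positive
    totals indicate left-to-right motion, negative the reverse."""
    total = 0
    for t in range(delay, len(left)):
        total += left[t - delay] * right[t] - right[t - delay] * left[t]
    return total

# A bright bar moving left-to-right hits the left detector one step
# before the right detector, so the delayed-left x undelayed-right
# product lines up in time and the correlator output is positive.
left_input = [0, 1, 0, 0]
right_input = [0, 0, 1, 0]
```

Note that inverting the contrast of the second frame (e.g. `right = [0, 0, -1, 0]`) flips the sign of the output, which is exactly the reversed-direction response that the reverse-phi stimulus correlation mentioned above produces in this class of model.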
Visual perception is not a passive process: in order to efficiently process visual input, the brain actively uses previous knowledge (e.g., memory) and expectations about what the world should look like. However, perception is not only influenced by previous knowledge. In particular, the perception of emotional stimuli is influenced by the emotional state of the observer. In other words, how we perceive the world depends not only on what we know of the world but also on how we feel. In this study, we further investigated the relation between mood and perception.
Methods and Findings
Observers performed a difficult stimulus detection task in which they had to detect schematic happy and sad faces embedded in noise. Mood was manipulated by means of music. We found that observers were more accurate in detecting faces congruent with their mood, corroborating earlier research. However, in trials in which no actual face was presented, observers made a significant number of false alarms. The content of these false alarms, or illusory percepts, was strongly influenced by the observers' mood.
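Hit and false-alarm patterns of the kind described above are conventionally quantified with signal detection theory. A minimal sketch under the standard equal-variance Gaussian assumptions (the rates in the usage comment are hypothetical, not values from the study):

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    """Sensitivity index: d' = z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

def criterion(hit_rate, fa_rate):
    """Response criterion c; negative values indicate a liberal bias,
    i.e. a tendency to report a face even when only noise is shown."""
    z = NormalDist().inv_cdf
    return -0.5 * (z(hit_rate) + z(fa_rate))

# e.g. d_prime(0.84, 0.16) separates true sensitivity from bias:
# a mood-congruent increase in false alarms would lower c without
# necessarily changing d'.
```

Framing the result this way makes the abstract's point concrete: mood could act on the criterion (what the observer is inclined to report) rather than only on sensitivity.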
As illusory percepts are believed to reflect the content of internal representations that are employed by the brain during top-down processing of visual input, we conclude that top-down modulation of visual processing is not purely predictive in nature: mood, in this case manipulated by music, may also directly alter the way we perceive the world.
Human cortical area MT+ (hMT+) is known to respond to visual motion stimuli, but its causal role in the conscious experience of motion remains largely unexplored. Studies in non-human primates demonstrate that altering activity in area MT can influence motion perception judgments, but animal studies are inherently limited in assessing subjective conscious experience. In the current study, we use functional magnetic resonance imaging (fMRI), intracranial electrocorticography (ECoG), and electrical brain stimulation (EBS) in three patients implanted with intracranial electrodes to address the role of area hMT+ in conscious visual motion perception. We show that in conscious human subjects, reproducible illusory motion can be elicited by electrical stimulation of hMT+. These visual motion percepts only occurred when the site of stimulation overlapped directly with the region of the brain that had increased fMRI and electrophysiological activity during moving compared to static visual stimuli in the same individual subjects. Electrical stimulation in neighboring regions failed to produce illusory motion. Our study provides evidence for a sufficient causal link between the hMT+ network and the human conscious experience of visual motion. It also suggests a clear spatial relationship between fMRI signal and ECoG activity in the human brain.
Neural activity fluctuates dynamically with time, and these changes have been reported to be of behavioral significance, despite occurring spontaneously. Through electroencephalography (EEG), fluctuations in α-band (8–14 Hz) activity have been identified over posterior sites that covary on a trial-by-trial basis with whether an upcoming visual stimulus will be detected or not. These fluctuations are thought to index the momentary state of visual cortex excitability. Here, we tested this hypothesis by directly exciting human visual cortex via transcranial magnetic stimulation (TMS) to induce illusory visual percepts (phosphenes) in blindfolded participants, while simultaneously recording EEG. We found that identical TMS stimuli either evoked a percept (P-yes) or not (P-no) depending on prestimulus α-activity. Low prestimulus α-band power resulted in TMS reliably inducing phosphenes (P-yes trials), whereas high prestimulus α-power led to the same TMS stimuli failing to evoke a visual percept (P-no trials). Additional analyses indicated that the perceptually relevant fluctuations in α-activity/visual cortex excitability were spatially specific and occurred on a subsecond time scale in a recurrent pattern. Our data directly link momentary levels of posterior α-band activity to distinct states of visual cortex excitability, and suggest that their spontaneous fluctuation constitutes a visual operation mode that is activated automatically even without retinal input.
electroencephalography (EEG); phosphene; state dependency; transcranial magnetic stimulation (TMS); visual perception
Previous work has shown differential amygdala response to African-American faces by Caucasian individuals. Furthermore, behavioral studies have demonstrated the existence of skin tone bias, the tendency to prefer light skin to dark skin. In the present study, we used functional magnetic resonance imaging (fMRI) to investigate whether skin tone bias moderates differential race-related amygdala activity. Eleven White participants viewed photographs of unfamiliar Black and White faces with varied skin tone (light, dark). Replicating past research, greater amygdala activity was observed for Black faces than White faces. Furthermore, dark-skinned targets elicited more amygdala activity than light-skinned targets. However, these results were qualified by a significant interaction between race and skin tone, such that amygdala activity was observed at equivalent levels for light- and dark-skinned Black targets, but dark-skinned White targets elicited greater amygdala activity than light-skinned White targets.
skin tone bias; functional magnetic resonance imaging; amygdala
Autoscopic phenomena are psychic illusory visual experiences consisting of the perception of the image of one's own body or face within space, either from an internal point of view, as in a mirror, or from an external point of view. Descriptions based on phenomenological criteria distinguish six types of autoscopic experiences: autoscopic hallucination, heautoscopy (heautoscopy proper), feeling of a presence, out-of-body experience, and the negative and inner forms of autoscopy.
Methods and results
We report the case of a patient with heautoscopic seizures. EEG recordings during the autoscopic experience showed a right parietal epileptic focus. This finding confirms the involvement of the temporo-parietal junction in abnormal body perception during autoscopic phenomena. We discuss and review the previous literature on the topic, as different localizations of the responsible cortical areas have been reported, suggesting that out-of-body experiences are generated in the right hemisphere whereas heautoscopy involves left-hemisphere structures.
Boundary contours are important for representing binocular surfaces, including those in binocular rivalry. Ooi & He (2006) showed that a half-image with a boundary contour defined by abutting gratings predominates in binocular rivalry. The present study investigates the monocular-boundary-contour mechanism using Kanizsa square-like rivalry displays. In Experiment 1, the left half-image had a vertical illusory contour on the right edge while the right half-image had a vertical illusory contour on the left edge. The Kanizsa-elements (discs and pacmen) were filled with 135deg-grating and placed on a 45deg-grating background. When fused, observers experienced a strong predominance for perceiving an illusory rectangle in front of four discs. But this percept was replaced by robust rivalry alternations when the stimulus was manipulated by (i) switching the half-images between eyes, (ii) relocating the pacmen in each half-image to form horizontal illusory contours, or (iii) placing the pacmen diagonally (thus eliminating each monocular illusory contour). Such robust rivalry alternations were similar to those experienced when a 135deg-grating disc rivaled with a 135deg-grating pacman alone on the 45deg-grating background (Experiment 2). Experiment 3 showed that the relatively stable illusory rectangle percept in Experiment 1 is affected by the alignment of the images in the two eyes, in a manner consistent with an adherence to the occlusion constraint in binocular surface formation.
Our visual percepts are not fully determined by physical stimulus inputs. Thus, in visual illusions such as the Kanizsa figure, inducers presented at the corners allow one to perceive the bounding contours of the figure in the absence of luminance-defined borders. We examined the discrimination of the curvature of these illusory contours that pass across retinal scotomas caused by macular degeneration. In contrast with previous studies with normal-sighted subjects that showed no perception of these illusory contours in the region of physiological scotomas at the optic nerve head, we demonstrated perfect discrimination of the curvature of the illusory contours over the pathological retinal scotoma. The illusion occurred despite the large scar around the macular lesion, strongly reducing discrimination of whether the inducer openings were acute or obtuse and suggesting that the coarse information in the inducers (low spatial frequency) sufficed. The result that subjective contours can pass through the pathological retinal scotoma suggests that the visual cortex, despite the loss of bottom-up input, can use low-spatial frequency information from the inducers to form a neural representation of new complex geometrical shapes inside the scotoma.
The stimulus requirements for perceiving a face are not well defined but are presumably simple, for vivid faces can often be seen in random or natural images such as cloud or rock formations. To characterize these requirements, we measured where observers reported the impression of faces in images defined by symmetric 1/f noise. This allowed us to examine the prominence and properties of different features and their necessary configurations. In these stimuli many faces can be perceived along the vertical midline, and appear stacked at multiple scales, reminiscent of “totem poles.” In addition to symmetry, the faces in noise are invariably upright and thus reveal the inversion effects that are thought to be a defining property of configural face processing. To a large extent, seeing a face required seeing eyes, and these were largely restricted to dark regions in the images. Other features were more subordinate and showed relatively little bias in polarity. Moreover, the prominence of eyes depended primarily on their luminance contrast and showed little influence of chromatic contrast. Notably, most faces were rated as clearly defined with highly distinctive attributes, suggesting that once an image area is coded as a face, it is perceptually completed in a manner consistent with this interpretation. This suggests that the requisite trigger features are sufficient to holistically “capture” the surrounding noise structure to form the facial representation. Yet despite these well-articulated percepts, we show in further experiments that while a pair of dark spots added to noise images appears face-like, these impressions fail to elicit other signatures of face processing; in particular, they fail to elicit an N170 or the fixation patterns typical of images of actual faces.
These results suggest that very simple stimulus configurations are sufficient to invoke many aspects of holistic and configural face perception while nevertheless failing to fully engage the neural machinery of face coding, implying that different signatures of face processing may have different stimulus requirements.
face perception; face detection; configural coding; facial features; symmetry; inversion effects; noise
Neuroimaging studies have identified a common network of brain regions involving the prefrontal and parietal cortices across a variety of working memory (WM) tasks. However, previous studies have also reported category-specific dissociations of activation within this network. In this study, we investigated the development of category-specific activation in a WM task with digits, letters, and faces. Eight-year-old children and adults performed a 2-back WM task while their brain activity was measured using functional magnetic resonance imaging (fMRI). Overall, children were significantly slower and less accurate than adults on all three WM conditions (digits, letters, and faces); however, within each age group, behavioral performance across the three conditions was very similar. fMRI results revealed category-specific activation in adults but not children in the intraparietal sulcus for the digit condition. Likewise, during the letter condition, category-specific activation was observed in adults but not children in the left occipital–temporal cortex. In contrast, children and adults showed highly similar brain-activity patterns in the lateral fusiform gyri when solving the 2-back WM task with face stimuli. Our results suggest that 8-year-old children do not yet engage the typical brain regions that have been associated with abstract or semantic processing of numerical symbols and letters when these processes are task-irrelevant and the primary task is demanding. Nevertheless, brain activity in letter-responsive areas predicted children’s spelling performance, underscoring the relationship between abstract processing of letters and linguistic abilities. Lastly, behavioral performance on the WM task was predictive of math and language abilities, highlighting the connection between WM and other cognitive abilities in development.
Adaptation to motion produces a motion aftereffect (MAE), where illusory, oppositely directed motion is perceived when viewing a stationary image. A common hypothesis for motion adaptation is that it reflects an imbalance of activity caused by neuronal fatigue. However, the perceptual MAE exhibits storage, in that the MAE appears even after a prolonged period of darkness is interposed between the adapting stimulus and the test, suggesting that fatigue cannot explain the perceptual MAE. We asked whether neural fatigue was a viable explanation for the oculomotor MAE (OMAE) by testing whether the OMAE exhibits storage. Human observers were adapted with moving, random-dot cinematograms. Following adaptation, they generated an OMAE with both pursuit and saccadic components. The OMAE occurred in the presence of a visual test stimulus, but not in the dark. When the test stimulus was introduced after the dark period, the OMAE reappeared, analogous to perceptual MAE storage. The results suggest that fatigue cannot explain the OMAE, and that visual stimulation is necessary to elicit it. We propose a model in which adaptation recalibrates the motion-processing network by adjusting the weights of the inputs to neurons in the middle-temporal (MT) area.
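The weight-recalibration idea proposed above can be sketched as an opponent read-out whose input weights are reduced by adaptation. The key property motivating the storage result is that an imbalanced read-out produces no signal in darkness (zero input) but produces a net "opposite-direction" signal once a visual test stimulus drives both channels. All numbers below are hypothetical, chosen only to illustrate that property:

```python
def motion_readout(pref_drive, null_drive, w_pref=1.0, w_null=1.0):
    """Opponent read-out: positive output = motion signal in the
    adapted (preferred) direction, negative = opposite direction."""
    return w_pref * pref_drive - w_null * null_drive

def adapt(w_pref, strength=0.3):
    """Adaptation recalibrates the network by reducing the weight
    of the channel driven during adaptation."""
    return w_pref * (1 - strength)

w = adapt(1.0)                                  # after motion adaptation
in_dark = motion_readout(0.0, 0.0, w_pref=w)    # no input -> no OMAE
with_test = motion_readout(1.0, 1.0, w_pref=w)  # stationary test drives
                                                # both channels equally
```

Here `in_dark` is zero regardless of the weight imbalance, while `with_test` is negative (an aftereffect in the opposite direction), so the recalibrated weights persist silently through darkness and only express themselves when a test stimulus is present, consistent with the storage finding.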
The human auditory system perceptually restores short deleted segments of speech and other sounds (e.g. tones) when the resulting silent gaps are filled by a potential masking noise. When this phenomenon, known as ‘auditory induction’, occurs, listeners experience the illusion of hearing an ongoing sound continuing through the interrupting noise even though the perceived sound is not physically present. Such illusions suggest that a key function of the auditory system is to allow listeners to perceive complete auditory objects with incomplete acoustic information, as may often be the case in multisource acoustic environments. At present, however, we know little about the possible functions of auditory induction in the sound-mediated behaviours of animals. The present study used two-choice phonotaxis experiments to test the hypothesis that female grey treefrogs, Hyla chrysoscelis, experience the illusory perceptual restoration of discrete pulses in the male advertisement call when pulses are deleted and replaced by a potential masking noise. While added noise restored some attractiveness to calls with missing pulses, there was little evidence to suggest that the frogs actually experienced the illusion of perceiving the missing pulses. Instead, the added noise appeared to function as an acoustic appendage that made some calls more attractive than others as a result of sensory biases, the expression of which depended on the temporal order and acoustic structure of the added appendages.
auditory grouping; auditory induction; auditory scene analysis; continuity illusion; gray treefrog; perceptual restoration; phonemic restoration; sensory bias; temporal induction
It has been hypothesized that the amygdala mediates the processing advantage of emotional items. In the present study, we employed functional magnetic resonance imaging (fMRI) to investigate how fear conditioning affected the visual processing of task-irrelevant faces. We hypothesized that faces previously paired with shock (threat faces) would more effectively vie for processing resources during conditions involving spatial competition. To investigate this question, following conditioning, participants performed a letter-detection task on an array of letters that was superimposed on task-irrelevant faces. Attentional resources were manipulated by having participants perform an easy or a difficult search task. Our findings revealed that threat fearful faces evoked stronger responses in the amygdala and fusiform gyrus relative to safe fearful faces during low-load attentional conditions, but not during high-load conditions. Consistent with the increased processing of shock-paired stimuli during the low-load condition, such stimuli exhibited increased behavioral priming and fMRI repetition effects relative to unpaired faces during a subsequent implicit-memory task. Overall, our results suggest a competition model in which affective significance signals from the amygdala may constitute a key modulatory factor determining the neural fate of visual stimuli. In addition, it appears that such a competitive advantage is only evident when sufficient processing resources are available to process the affective stimulus.
attention; emotion; fear conditioning; facial expressions; fMRI
Face perception is a critical social ability and identifying its neural correlates is important from both basic and applied perspectives. In EEG recordings, faces elicit a distinct electrophysiological signature, the N170, which has a larger amplitude and shorter latency in response to faces compared to other objects. However, determining the face specificity of any neural marker for face perception hinges on finding an appropriate control stimulus. We used a novel, model-based parametric stimulus set consisting of 300 images that spanned a continuum between random patches of natural scenes and genuine faces in order to explore the selectivity of face-sensitive ERP responses. Critically, our database contained “false alarm” images that were misclassified as faces by a computational face-detection system and varied in their image-level similarity to real faces. High-density (128-channel) event-related potentials (ERPs) were recorded while 23 adult subjects viewed all 300 images in random order and determined whether each image was a face or non-face. The goal of our analyses was to determine the extent to which a gradient of sensitivity to face-like structure was evident in the ERP signal. Traditional waveform analyses revealed that the N170 component over occipitotemporal electrodes was larger in amplitude for faces compared to all non-faces, even those that were high in image similarity to faces, suggesting strict selectivity for veridical face stimuli. By contrast, single-trial classification of the entire waveform measured at the same sensors revealed that misclassifications of non-face patterns as faces increased with image-level similarity to faces. These results suggest that individual components may exhibit steep selectivity, but integration of multiple waveform features may afford graded information regarding stimulus appearance.