PMC

Results 1-25 (34)
1.  Learned face-voice pairings facilitate visual search 
Psychonomic Bulletin & Review  2015;22(2):429-436.
Voices provide a rich source of information that is important for identifying individuals and for social interaction. During search for a face in a crowd, voices often accompany visual information and they facilitate localization of the sought individual. However, it is unclear whether this facilitation occurs primarily because the voice cues the location of the face or because it also increases the salience of the associated face. Here we demonstrate that a voice that provides no location information nonetheless facilitates visual search for an associated face. We trained novel face/voice associations and verified learning using a two-alternative forced-choice task in which participants had to correctly match a presented voice to the associated face. Following training, participants searched for a previously learned target face among other faces while hearing one of the following sounds (localized at the center of the display): a congruent-learned voice, an incongruent but familiar voice, an unlearned and unfamiliar voice, or a time-reversed voice. Only the congruent-learned voice speeded visual search for the associated face. This result suggests that voices facilitate visual detection of associated faces, potentially by increasing their visual salience, and that the underlying crossmodal associations can be established through brief training.
doi:10.3758/s13423-014-0685-3
PMCID: PMC4295001  PMID: 25023955
Crossmodal integration; Visual search; Face perception; Spatial attention
2.  Direction of auditory pitch-change influences visual search for slope from graphs 
Perception  2015;44(7):764-778.
Linear trend (slope) is important information conveyed by graphs. We investigated how sounds influenced slope detection in a visual search paradigm. Four bar graphs or scatter plots were presented on each trial. Participants looked for a positive-slope or a negative-slope target (in blocked trials), and responded to targets in a go/no-go fashion. For example, in a positive-slope-target block, the target graph displayed a positive slope while other graphs displayed negative slopes (a go trial), or all graphs displayed negative slopes (a no-go trial). When an ascending or descending sound was presented concurrently, ascending sounds slowed detection of negative-slope targets whereas descending sounds slowed detection of positive-slope targets. The sounds had no effect when they immediately preceded the visual search displays, suggesting that the results were due to crossmodal interaction rather than priming. The sounds also had no effect when targets were words describing slopes, such as “positive,” “negative,” “increasing,” or “decreasing,” suggesting that the results were unlikely due to semantic-level interactions. Manipulations of spatiotemporal similarity between sounds and graphs had little effect. These results suggest that ascending and descending sounds influence visual search for slope based on a general association between the direction of auditory pitch-change and visual linear trend.
PMCID: PMC4638167  PMID: 26541054
3.  Haptic guidance of overt visual attention 
Research has shown that information accessed from one sensory modality can influence perceptual and attentional processes in another modality. Here, we demonstrated a novel crossmodal influence of haptic-shape information on visual attention. Participants visually searched for a target object (e.g., an orange) presented among distractor objects, fixating the target as quickly as possible. While searching for the target, participants held an item of a specific shape in their hands; the held item was never viewed and remained out of sight. In two experiments, we demonstrated that the time for the eyes to reach a target—a measure of overt visual attention—was reduced when the shape of the held item (e.g., a sphere) was consistent with the shape of the visual target (e.g., an orange), relative to when the held shape was unrelated to the target (e.g., a hockey puck) or when no shape was held. This haptic-to-visual facilitation occurred despite the fact that the held shapes were not predictive of the visual targets’ shapes, suggesting that the crossmodal influence occurred automatically, reflecting shape-specific haptic guidance of overt visual attention.
doi:10.3758/s13414-014-0696-1
PMCID: PMC4231009  PMID: 24935805
Multisensory processing; Visual search; Attention
4.  Lip reading without awareness 
Psychological Science  2014;25(9):1835-1837.
doi:10.1177/0956797614542132
PMCID: PMC4303247  PMID: 25060525
5.  In the working memory of the beholder: Art appreciation is enhanced when visual complexity is compatible with working memory 
What shapes art appreciation? Much research has focused on the importance of visual features themselves (e.g., symmetry, natural scene statistics) and of the viewer’s experience and expertise with specific artworks. However, even after taking these factors into account, there are considerable individual differences in art preferences. Our new result suggests that art preference is also influenced by the compatibility between visual properties and the characteristics of the viewer’s visual system. Specifically, we have demonstrated, using 120 artworks from diverse periods, cultures, genres and styles, that art appreciation is increased when the level of visual complexity within an artwork is compatible with the viewer’s visual working memory capacity. The result highlights the importance of the interaction between visual features and the beholder’s general visual capacity in shaping art appreciation.
doi:10.1037/a0039314
PMCID: PMC4556127  PMID: 25984587
visual-object working memory; art; complexity
6.  A relational structure of voluntary visual-attention abilities 
Many studies have examined attention mechanisms involved in specific behavioral tasks (e.g., search, tracking, distractor inhibition). However, relatively little is known about the relationships among those attention mechanisms. Is there a fundamental attention faculty that makes a person superior or inferior at most types of attention tasks, or do relatively independent processes mediate different attention skills? We focused on individual differences in voluntary visual-attention abilities using a battery of eleven representative tasks. An application of parallel analysis, hierarchical-cluster analysis, and multidimensional scaling to the inter-task correlation matrix revealed four functional clusters, representing spatiotemporal attention, global attention, transient attention, and sustained attention, organized along two dimensions, one contrasting spatiotemporal and global attention and the other contrasting transient and sustained attention. Comparison with the neuroscience literature suggests that the spatiotemporal-global dimension corresponds to the dorsal frontoparietal circuit and the transient-sustained dimension corresponds to the ventral frontoparietal circuit, with distinct sub-regions mediating the separate clusters within each dimension. We also obtained highly specific patterns of gender difference, and of deficits for college students with elevated ADHD traits. These group differences suggest that different mechanisms of voluntary visual attention can be selectively strengthened or weakened based on genetic, experiential, and/or pathological factors.
doi:10.1037/a0039000
PMCID: PMC4553040  PMID: 25867505
7.  Audition dominates vision in duration perception irrespective of salience, attention, and temporal discriminability 
Whereas the visual modality tends to dominate over the auditory modality in bimodal spatial perception, the auditory modality tends to dominate over the visual modality in bimodal temporal perception. Recent results suggest that the visual modality dominates bimodal spatial perception because spatial discriminability is typically greater for the visual than auditory modality; accordingly, visual dominance is eliminated or reversed when visual-spatial discriminability is reduced by degrading visual stimuli to be equivalent or inferior to auditory spatial discriminability. Thus, for spatial perception, the modality that provides greater discriminability dominates. Here we ask whether auditory dominance in duration perception is similarly explained by factors that influence the relative quality of auditory and visual signals. In contrast to the spatial results, the auditory modality dominated over the visual modality in bimodal duration perception even when the auditory signal was clearly weaker, when the auditory signal was ignored (i.e., the visual signal was selectively attended), and when the temporal discriminability was equivalent for the auditory and visual signals. Thus, unlike spatial perception where the modality carrying more discriminable signals dominates, duration perception seems to be mandatorily linked to auditory processing under most circumstances.
doi:10.3758/s13414-014-0663-x
PMCID: PMC4096074  PMID: 24806403
Time perception; Modality effect; Auditory dominance; Selective attention
8.  Parietal connectivity mediates multisensory facilitation 
NeuroImage  2013;78:396-401.
Our senses interact in daily life through multisensory integration, facilitating perceptual processes and behavioral responses. The neural mechanisms proposed to underlie this multisensory facilitation include anatomical connections directly linking early sensory areas, indirect connections to higher-order multisensory regions, as well as thalamic connections. Here we examine the relationship between white matter connectivity, as assessed with diffusion tensor imaging, and individual differences in multisensory facilitation and provide the first demonstration of a relationship between anatomical connectivity and multisensory processing in typically developed individuals. Using a whole-brain analysis and contrasting anatomical models of multisensory processing we found that increased connectivity between parietal regions and early sensory areas was associated with the facilitation of reaction times to multisensory (auditory-visual) stimuli. Furthermore, building on prior animal work suggesting the involvement of the superior colliculus in this process, using probabilistic tractography we determined that the strongest cortical projection area connected with the superior colliculus includes the region of connectivity implicated in our independent whole-brain analysis.
doi:10.1016/j.neuroimage.2013.04.047
PMCID: PMC3672392  PMID: 23611862
diffusion tensor; cross-modal; bimodal; redundant target; redundant signals; superior colliculus
9.  Auditory rhythms are systemically associated with spatial frequency and density information in visual scenes 
Psychonomic Bulletin & Review  2013;20(4):740-746.
A variety of perceptual correspondences between auditory and visual features have been reported, but few studies have investigated how rhythm, an auditory feature defined purely by dynamics relevant to speech and music, interacts with visual features. Here, we demonstrate a novel crossmodal association between auditory rhythm and visual clutter. Participants were shown a variety of visual scenes from diverse categories and were asked to report the auditory rhythm that perceptually matched each scene by adjusting the rate of amplitude modulation (AM) of a sound. Participants matched each scene to a specific AM rate with surprising consistency. A spatial-frequency analysis showed that scenes with larger contrast energy in midrange spatial frequencies were matched to faster AM rates. Bandpass-filtering the scenes indicated that large contrast energy in this spatial-frequency range is associated with an abundance of object boundaries and contours, suggesting that participants matched more cluttered scenes to faster AM rates. Consistent with this hypothesis, AM-rate matches were strongly correlated with perceived clutter. Additional results indicate that both AM-rate matches and perceived clutter depend on object-based (cycles per object) rather than retinal (cycles per degree of visual angle) spatial frequency. Taken together, these results suggest a systematic crossmodal association between auditory rhythm, representing density in the temporal domain, and visual clutter, representing object-based density in the spatial domain. This association may allow the use of auditory rhythm to influence how visual clutter is perceived and attended.
doi:10.3758/s13423-013-0399-y
PMCID: PMC3706496  PMID: 23423817
Crossmodal; multisensory integration; spatial frequency; amplitude modulation rate; natural scenes; density; visual clutter
10.  Action enhances auditory but not visual temporal sensitivity 
Psychonomic Bulletin & Review  2013;20(1):108-114.
People naturally dance to music, and research has shown that rhythmic auditory stimuli facilitate production of precisely timed body movements. If motor mechanisms are closely linked to auditory temporal processing, just as auditory temporal processing facilitates movement production, producing action might reciprocally enhance auditory temporal sensitivity. We tested this novel hypothesis with a standard temporal-bisection paradigm, in which the slope of the temporal-bisection function provides a measure of temporal sensitivity. The bisection slope for auditory time perception was steeper when participants initiated each auditory stimulus sequence via a keypress than when they passively heard each sequence, demonstrating that initiating action enhances auditory temporal sensitivity. This enhancement is specific to the auditory modality, because voluntarily initiating each sequence did not enhance visual temporal sensitivity. A control experiment ruled out the possibility that tactile sensation associated with a keypress increased auditory temporal sensitivity. Taken together, these results demonstrate a unique reciprocal relationship between auditory time perception and motor mechanisms. As auditory perception facilitates precisely timed movements, generating action enhances auditory temporal sensitivity.
doi:10.3758/s13423-012-0330-y
PMCID: PMC3558542  PMID: 23090750
Action; Auditory temporal sensitivity; Visual temporal sensitivity
11.  Visual Attention Modulates Insight Versus Analytic Solving of Verbal Problems 
The Journal of Problem Solving  2012;4(2):94-115.
Behavioral and neuroimaging findings indicate that distinct cognitive and neural processes underlie solving problems with sudden insight. Moreover, people with less focused attention sometimes perform better on tests of insight and creative problem solving. However, it remains unclear whether different states of attention, within individuals, influence the likelihood of solving problems with insight or with analysis. In this experiment, participants (N = 40) performed a baseline block of verbal problems, then performed one of two visual tasks, each emphasizing a distinct aspect of visual attention, followed by a second block of verbal problems to assess change in performance. After participants engaged in a center-focused flanker task requiring relatively focused visual attention, they reported solving more verbal problems with analytic processing. In contrast, after participants engaged in a rapid object identification task requiring attention to broad space and weak associations, they reported solving more verbal problems with insight. These results suggest that general attention mechanisms influence both visual attention task performance and verbal problem solving.
doi:10.7771/1932-6246.1127
PMCID: PMC3897204  PMID: 24459538
verbal problem solving; visual attention; insight; creativity; focused attention; broadened attention
12.  Seeing the Song: Left Auditory Structures May Track Auditory-Visual Dynamic Alignment 
PLoS ONE  2013;8(10):e77201.
Auditory and visual signals generated by a single source tend to be temporally correlated, such as the synchronous sounds of footsteps and the limb movements of a walker. Continuous tracking and comparison of the dynamics of auditory-visual streams is thus useful for the perceptual binding of information arising from a common source. Although language-related mechanisms have been implicated in the tracking of speech-related auditory-visual signals (e.g., speech sounds and lip movements), it is not well known what sensory mechanisms generally track ongoing auditory-visual synchrony for non-speech signals in a complex auditory-visual environment. To begin to address this question, we used music and visual displays that varied in the dynamics of multiple features (e.g., auditory loudness and pitch; visual luminance, color, size, motion, and organization) across multiple time scales. Auditory activity (monitored using auditory steady-state responses, ASSR) was selectively reduced in the left hemisphere when the music and dynamic visual displays were temporally misaligned. Importantly, ASSR was not affected when attentional engagement with the music was reduced, or when visual displays presented dynamics clearly dissimilar to the music. These results appear to suggest that left-lateralized auditory mechanisms are sensitive to auditory-visual temporal alignment, but perhaps only when the dynamics of auditory and visual streams are similar. These mechanisms may contribute to correct auditory-visual binding in a busy sensory environment.
doi:10.1371/journal.pone.0077201
PMCID: PMC3806747  PMID: 24194873
13.  Rapid volitional control of apparent motion during percept generation 
How rapidly can one voluntarily influence percept generation? The time course of voluntary visual-spatial attention is well studied, but the time course of intentional control over percept generation is relatively unknown. We investigated the latter using “one-shot” apparent motion. When a vertical or horizontal pair of squares is replaced by its 90° rotated version, the bottom-up signal is ambiguous. From this ambiguous signal, it is known that people can intentionally generate a percept of rotation in a desired direction (clockwise or counterclockwise). To determine the time course of this intentional control, we instructed participants to voluntarily induce rotation in a pre-cued direction (clockwise rotation when a high-pitched tone is heard and counter-clockwise rotation when a low-pitched tone is heard), and then to report the direction of rotation that was actually perceived. We varied the delay between the instructional cue and the rotated frame (cue-lead time) from 0 ms to 1067 ms. Intentional control became more effective with longer cue-lead times (asymptotically effective at 533 ms). Notably, intentional control was reliable even with a zero cue-lead time; control experiments ruled out response bias and the development of an auditory-visual association as explanations. This demonstrates that people can interpret an auditory cue and intentionally generate a desired motion percept surprisingly rapidly, entirely within the subjectively instantaneous moment in which the visual system constructs a percept of apparent motion.
doi:10.3758/s13414-013-0504-3
PMCID: PMC3800212  PMID: 23864265
intentional control; visual bistability; apparent motion; attentive tracking
14.  Flicker adaptation of low-level cortical visual neurons contributes to temporal dilation 
Several seconds of adaptation to a flickered stimulus causes a subsequent brief static stimulus to appear longer in duration. Non-sensory factors such as increased arousal and attention have been thought to mediate this flicker-based temporal-dilation aftereffect. Here we provide evidence that adaptation of low-level cortical visual neurons contributes to this aftereffect. The aftereffect was significantly reduced by a 45° change in Gabor orientation between adaptation and test. Because orientation-tuning bandwidths are smaller in lower-level cortical visual areas and are approximately 45° in human V1, the result suggests that flicker adaptation of orientation-tuned V1 neurons contributes to the temporal-dilation aftereffect. The aftereffect was abolished when the adaptor and test stimuli were presented to different eyes. Because eye preferences are strong in V1 but diminish in higher-level visual areas, the eye specificity of the aftereffect corroborates the involvement of low-level cortical visual neurons. Our results thus suggest that flicker adaptation of low-level cortical visual neurons contributes to expanding visual duration. Furthermore, this temporal-dilation aftereffect dissociates from the previously reported temporal-constriction aftereffect on the basis of the differences in their orientation and flicker-frequency selectivity, suggesting that the visual system possesses at least two distinct and potentially complementary mechanisms for adaptively coding perceived duration.
doi:10.1037/a0029495
PMCID: PMC3758686  PMID: 22866761
15.  Detecting and Categorizing Fleeting Emotions in Faces 
Emotion (Washington, D.C.)  2012;13(1):76-91.
Expressions of emotion are often brief, providing only fleeting images from which to base important social judgments. We sought to characterize the sensitivity and mechanisms of emotion detection and expression categorization when exposure to faces is very brief, and to determine whether these processes dissociate. Observers viewed 2 backward-masked facial expressions in quick succession, 1 neutral and the other emotional (happy, fearful, or angry), in a 2-interval forced-choice task. On each trial, observers attempted to detect the emotional expression (emotion detection) and to classify the expression (expression categorization). Above-chance emotion detection was possible with extremely brief exposures of 10 ms and was most accurate for happy expressions. We compared categorization among expressions using a d′ analysis, and found that categorization was usually above chance for angry versus happy and fearful versus happy, but consistently poor for fearful versus angry expressions. Fearful versus angry categorization was poor even when only negative emotions (fearful, angry, or disgusted) were used, suggesting that this categorization is poor independent of decision context. Inverting faces impaired angry versus happy categorization, but not emotion detection, suggesting that information from facial features is used differently for emotion detection and expression categorizations. Emotion detection often occurred without expression categorization, and expression categorization sometimes occurred without emotion detection. These results are consistent with the notion that emotion detection and expression categorization involve separate mechanisms.
doi:10.1037/a0029193
PMCID: PMC3758689  PMID: 22866885
emotion detection; expression categorization; face-inversion effect; awareness; face processing
16.  Suppressed semantic information accelerates analytic problem solving 
Psychonomic Bulletin & Review  2013;20(3):581-585.
The present study investigated the limits of semantic processing without awareness during continuous flash suppression (CFS). We used compound remote associate word problems, in which three seemingly unrelated words (e.g., pine, crab, sauce) form a common compound with a single solution word (e.g., apple). During the first 3 s of each trial, the three problem words or three irrelevant words (control condition) were suppressed from awareness, using CFS. The words then became visible, and participants attempted to solve the word problem. Once the participants solved the problem, they indicated whether they had solved it by insight or analytically. Overall, the compound remote associate word problems were solved significantly faster when the problem words, rather than irrelevant words, had been presented during the suppression period. However, this facilitation occurred only when people solved with analysis, not with insight. These results demonstrate that semantic processing, but not necessarily semantic integration, may occur without awareness.
doi:10.3758/s13423-012-0364-1
PMCID: PMC3746564  PMID: 23250762
Awareness; Continuous flash suppression; Semantic processing; Semantic integration; Binocular rivalry; Problem solving
17.  Sounds exaggerate visual shape 
Cognition  2012;124(2):194-200.
While perceiving speech, people see mouth shapes that are systematically associated with sounds. In particular, a vertically stretched mouth produces a /woo/ sound, whereas a horizontally stretched mouth produces a /wee/ sound. We demonstrate that hearing these speech sounds alters how we see aspect ratio, a basic visual feature that contributes to perception of 3D space, objects and faces. Hearing a /woo/ sound increases the apparent vertical elongation of a shape, whereas hearing a /wee/ sound increases the apparent horizontal elongation. We further demonstrate that these sounds influence aspect ratio coding. Viewing and adapting to a tall (or flat) shape makes a subsequently presented symmetric shape appear flat (or tall). These aspect ratio aftereffects are enhanced when associated speech sounds are presented during the adaptation period, suggesting that the sounds influence visual population coding of aspect ratio. Taken together, these results extend previous demonstrations that visual information constrains auditory perception by showing the converse – speech sounds influence visual perception of a basic geometric feature.
doi:10.1016/j.cognition.2012.04.009
PMCID: PMC3383334  PMID: 22633004
Auditory–visual; Aspect ratio; Crossmodal; Shape perception; Speech perception
18.  Interactive coding of visual spatial frequency and auditory amplitude-modulation rate 
Current Biology  2012;22(5):383-388.
doi:10.1016/j.cub.2012.01.004
PMCID: PMC3298604  PMID: 22326023
visual spatial frequency; auditory amplitude-modulation rate; auditory-visual interactions
19.  Local and global level-priming occurs for hierarchical stimuli composed of outlined, but not filled-in, elements 
Journal of Vision  2013;13(2):23.
When attention is directed to the local or global level of a hierarchical stimulus, attending to that same scale of information is subsequently facilitated. This effect is called level-priming, and in its pure form, it has been dissociated from stimulus- or response-repetition priming. In previous studies, pure level-priming has been demonstrated using hierarchical stimuli composed of alphanumeric forms consisting of lines. Here, we test whether pure level-priming extends to hierarchical configurations of generic geometric forms composed of elements that can be depicted either outlined or filled-in. Interestingly, whereas hierarchical stimuli composed of outlined elements benefited from pure level-priming, for both local and global targets, those composed of filled-in elements did not. The results are not readily attributable to differences in spatial frequency content, suggesting that forms composed of outlined and filled-in elements are treated differently by attention and/or priming mechanisms. Because our results present a surprising limit on attentional persistence to scale, we propose that other findings in the attention and priming literature be evaluated for their generalizability across a broad range of stimulus classes, including outlined and filled-in depictions.
doi:10.1167/13.2.23
PMCID: PMC3587389  PMID: 23420422
priming; local; global; attention; hierarchical stimuli
20.  Neural activity tied to reading predicts individual differences in extended-text comprehension 
Reading comprehension depends on neural processes supporting the access, understanding, and storage of words over time. Examinations of the neural activity correlated with reading have contributed to our understanding of reading comprehension, especially for the comprehension of sentences and short passages. However, the neural activity associated with comprehending an extended text is not well-understood. Here we describe a current-source-density (CSD) index that predicts individual differences in the comprehension of an extended text. The index is the difference in CSD-transformed event-related potentials (ERPs) to a target word between two conditions: a comprehension condition with words from a story presented in their original order, and a scrambled condition with the same words presented in a randomized order. In both conditions participants responded to the target word, and in the comprehension condition they also tried to follow the story in preparation for a comprehension test. We reasoned that the spatiotemporal pattern of difference-CSDs would reflect comprehension-related processes beyond word-level processing. We used a pattern-classification method to identify the component of the difference-CSDs that accurately (88%) discriminated good from poor comprehenders. The critical CSD index was focused at a frontal-midline scalp site, occurred 400–500 ms after target-word onset, and was strongly correlated with comprehension performance. Behavioral data indicated that group differences in effort or motor preparation could not explain these results. Further, our CSD index appears to be distinct from the well-known P300 and N400 components, and CSD transformation seems to be crucial for distinguishing good from poor comprehenders using our experimental paradigm. Once our CSD index is fully characterized, this neural signature of individual differences in extended-text comprehension may aid the diagnosis and remediation of reading comprehension deficits.
doi:10.3389/fnhum.2013.00655
PMCID: PMC3819048  PMID: 24223540
reading comprehension; EEG/ERP; machine learning applied to neuroscience; current source density; working memory
21.  Changes in auditory frequency guide visual-spatial attention 
Cognition  2011;121(1):133-139.
How do the characteristics of sounds influence the allocation of visual-spatial attention? Natural sounds typically change in frequency. Here we demonstrate that the direction of frequency change guides visual-spatial attention more strongly than the average or ending frequency, and provide evidence suggesting that this cross-modal effect may be mediated by perceptual experience. We used a Go/No-Go color-matching task to avoid response compatibility confounds. Participants performed the task either with their heads upright or tilted by 90°, misaligning the head-centered and environmental axes. The first of two colored circles was presented at fixation and the second was presented in one of four surrounding positions in a cardinal or diagonal direction. Either an ascending or descending auditory-frequency sweep was presented coincident with the first circle. Participants were instructed to respond to the color match between the two circles and to ignore the uninformative sounds. Ascending frequency sweeps facilitated performance (response time and/or sensitivity) when the second circle was presented at the cardinal top position and descending sweeps facilitated performance when the second circle was presented at the cardinal bottom position; there were no effects of the average or ending frequency. The sweeps had no effects when circles were presented at diagonal locations, and head tilt entirely eliminated the effect. Thus, visual-spatial cueing by pitch change is narrowly tuned to vertical directions and dominates any effect of average or ending frequency. Because this cross-modal cueing is dependent on the alignment of head-centered and environmental axes, it may develop through associative learning during waking upright experience.
doi:10.1016/j.cognition.2011.06.003
PMCID: PMC3149771  PMID: 21741633
cross-modal perception; auditory-visual interactions; visual-spatial attention; implicit attentional processing; multi-modal cognition
22.  Awareness Becomes Necessary Between Adaptive Pattern Coding of Open and Closed Curvatures 
Psychological Science  2011;22(7):943-950.
Visual pattern processing becomes increasingly complex along the ventral pathway, from the low-level coding of local orientation in the primary visual cortex to the high-level coding of face identity in temporal visual areas. Previous research using pattern aftereffects as a psychophysical tool to measure activation of adaptive feature coding has suggested that awareness is relatively unimportant for the coding of orientation, but awareness is crucial for the coding of face identity. We investigated where along the ventral visual pathway awareness becomes crucial for pattern coding. Monoptic masking, which interferes with neural spiking activity in low-level processing while preserving awareness of the adaptor, eliminated open-curvature aftereffects but preserved closed-curvature aftereffects. In contrast, dichoptic masking, which spares spiking activity in low-level processing while wiping out awareness, preserved open-curvature aftereffects but eliminated closed-curvature aftereffects. This double dissociation suggests that adaptive coding of open and closed curvatures straddles the divide between weakly and strongly awareness-dependent pattern coding.
doi:10.1177/0956797611413292
PMCID: PMC3261759  PMID: 21690314
awareness; pattern adaptation; visual perception
23.  Object-based auditory facilitation of visual search for pictures and words with frequent and rare targets 
Acta Psychologica  2010;137(2):252-259.
Auditory and visual processes demonstrably enhance each other based on spatial and temporal coincidence. Our recent results on visual search have shown that auditory signals also enhance visual salience of specific objects based on multimodal experience. For example, we tend to see an object (e.g., a cat) and simultaneously hear its characteristic sound (e.g., “meow”), to name an object when we see it, and to vocalize a word when we read it, but we do not tend to see a word (e.g., cat) and simultaneously hear the characteristic sound (e.g., “meow”) of the named object. If auditory-visual enhancements occur based on this pattern of experiential associations, playing a characteristic sound (e.g., “meow”) should facilitate visual search for the corresponding object (e.g., an image of a cat), hearing a name should facilitate visual search for both the corresponding object and corresponding word, but playing a characteristic sound should not facilitate visual search for the name of the corresponding object. Our present and prior results together confirmed these experiential-association predictions. We also recently showed that the underlying object-based auditory-visual interactions occur rapidly (within 220 ms) and guide initial saccades towards target objects. If object-based auditory-visual enhancements are automatic and persistent, an interesting application would be to use characteristic sounds to facilitate visual search when targets are rare, such as during baggage screening. Our participants searched for a gun among other objects when a gun was presented on only 10% of the trials. The search time was speeded when a gun sound was played on every trial (primarily on gun-absent trials); importantly, playing gun sounds facilitated both gun-present and gun-absent responses, suggesting that object-based auditory-visual enhancements persistently increase the detectability of guns rather than simply biasing gun-present responses. Thus, object-based auditory-visual interactions that derive from experiential associations rapidly and persistently increase visual salience of corresponding objects.
doi:10.1016/j.actpsy.2010.07.017
PMCID: PMC3010345  PMID: 20864070
24.  Laughter exaggerates happy and sad faces depending on visual context 
Psychonomic Bulletin & Review  2012;19(2):163-169.
Laughter is an auditory stimulus that powerfully conveys positive emotion. We investigated how laughter influenced visual perception of facial expressions. We simultaneously presented laughter with a happy, neutral, or sad schematic face. The emotional face was briefly presented either alone or among a crowd of neutral faces. We used a matching method to determine how laughter influenced the perceived intensity of happy, neutral, and sad expressions. For a single face, laughter increased the perceived intensity of a happy expression. Surprisingly, for a crowd of faces, laughter produced the opposite effect, increasing the perceived intensity of a sad expression. A follow-up experiment revealed that this contrast effect may have occurred because laughter made the neutral distracter faces appear slightly happy, thereby making the deviant sad expression stand out in contrast. A control experiment ruled out semantic mediation of the laughter effects. Our demonstration of the strong context dependence of laughter effects on facial expression perception encourages a re-examination of the previously demonstrated effects of prosody, speech content, and mood on face perception, as they may similarly be context dependent.
doi:10.3758/s13423-011-0198-2
PMCID: PMC3307857  PMID: 22215467
Crossmodal interaction; emotion; facial expressions; laughter
25.  Differential Roles of Frequency-following and Frequency-doubling Visual Responses Revealed by Evoked Neural Harmonics 
Journal of Cognitive Neuroscience  2010;23(8):1875-1886.
Frequency-following and frequency-doubling neurons are ubiquitous in both striate and extrastriate visual areas. However, responses from these two types of neural populations have not been effectively compared in humans because previous EEG studies have not successfully dissociated responses from these populations. We devised a light–dark flicker stimulus that unambiguously distinguished these responses as reflected in the first and second harmonics in the steady-state visual evoked potentials. These harmonics revealed the spatial and functional segregation of frequency-following (the first harmonic) and frequency-doubling (the second harmonic) neural populations. Spatially, the first and second harmonics in steady-state visual evoked potentials exhibited divergent posterior scalp topographies for a broad range of EEG frequencies. The scalp maximum was medial for the first harmonic and contralateral for the second harmonic, a divergence not attributable to absolute response frequency. Functionally, voluntary visual–spatial attention strongly modulated the second harmonic but had negligible effects on the simultaneously elicited first harmonic. These dissociations suggest an intriguing possibility that frequency-following and frequency-doubling neural populations may contribute complementary functions to resolve the conflicting demands of attentional enhancement and signal fidelity—the frequency-doubling population may mediate substantial top–down signal modulation for attentional selection, whereas the frequency-following population may simultaneously preserve relatively undistorted sensory qualities regardless of the observer’s cognitive state.
doi:10.1162/jocn.2010.21536
PMCID: PMC3278072  PMID: 20684661