Results 1-25 (126)
 

1.  Motivational influences on response inhibition measures 
Psychological research has placed great emphasis on inhibitory control due to its integral role in normal cognition and clinical disorders. The stop-signal task, in conjunction with the stop-signal reaction time (SSRT) measure, provides a well-established paradigm for measuring motor response inhibition. However, the influence of motivation and strategic decision-making on stop-signal performance and SSRT has not been examined. In the present study, we conceptualize the stop-signal paradigm as a decision-making task involving the tradeoff between fast responding and accurate inhibition. In four experiments, we demonstrate that this performance tradeoff is influenced by inherent motivational biases as well as explicit strategic control, resulting in systematic differences in conventional measures of inhibitory ability, such as SSRT. Within subjects, we found that SSRT was lower when participants favored correct stopping over fast responding, and was higher when participants favored fast responding over correct stopping. We present a novel variant of the stop-signal task that uses a monetary incentive structure to manipulate motivated speed-accuracy tradeoffs. By sampling performance at multiple tradeoff settings, we obtain a measure of inhibitory ability that is not confounded with motivational or strategic bias, and thus, more easily interpretable when comparing across participants. We present a working theoretical model to explain the effects of motivational context on response inhibition.
doi:10.1037/a0016802
PMCID: PMC3983778  PMID: 20364928
inhibitory control; SSRT; speed-accuracy tradeoffs
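For context, SSRT is conventionally estimated with the race-model "integration method": the go-RT distribution is integrated up to the probability of responding on stop trials, and the mean stop-signal delay is subtracted. A minimal sketch follows, using simulated numbers rather than data from this study:

```python
import numpy as np

# Sketch of the standard integration method for estimating stop-signal
# reaction time (SSRT). All numbers are simulated for illustration;
# they are not data from the study above.
rng = np.random.default_rng(1)

go_rts = rng.normal(500, 80, size=400)  # go-trial RTs in ms
mean_ssd = 250                          # mean stop-signal delay in ms
p_respond = 0.45                        # P(respond | stop signal)

# Race model: SSRT = the go-RT quantile at P(respond | signal),
# minus the mean stop-signal delay.
ssrt = np.quantile(go_rts, p_respond) - mean_ssd
print(f"estimated SSRT ~ {ssrt:.0f} ms")
```

Because both go-RT speed and P(respond | signal) enter this estimate, a strategic shift toward faster responding or more careful stopping changes SSRT even with inhibitory ability held constant, which is the confound the abstract targets.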
2.  The Attentional Effects of Single Cues and Color Singletons on Visual Sensitivity 
Sudden changes in the visual periphery can automatically draw attention to their locations. For example, the brief flash of a single object (a “cue”) rapidly enhances contrast sensitivity for subsequent stimuli in its vicinity. Feature singletons (e.g., a red circle among green circles) can also capture attention in a variety of tasks. Here, we evaluate whether a peripheral cue that enhances contrast sensitivity when it appears alone has a similar effect when it appears as a color singleton, with the same stimuli and task. In four experiments we asked observers to report the orientation of a target Gabor stimulus, which was preceded by an uninformative cue array consisting either of a single disk or of 16 disks containing a color or luminance singleton. Accuracy was higher and contrast thresholds lower when the single cue appeared at or near the target’s location, compared with farther away. The color singleton also modulated performance but to a lesser degree and only when it appeared exactly at the target’s location. Thus, this is the first study to demonstrate that cueing by color singletons, like single cues, can enhance sensory signals at an early stage of processing.
doi:10.1037/a0033775
PMCID: PMC3899109  PMID: 23875570
vision; attention; attentional capture; single cues; color singletons; feature contrast; contrast sensitivity
3.  Lexically Guided Phonetic Retuning of Foreign-Accented Speech and Its Generalization 
Listeners use lexical knowledge to retune phoneme categories. When hearing an ambiguous sound between /s/ and /f/ in lexically unambiguous contexts such as gira[s/f], listeners learn to interpret the sound as /f/ because gira[f] is a real word and gira[s] is not. Later, they apply this learning even in lexically ambiguous contexts (perceiving knife rather than nice). Although such retuning could help listeners adapt to foreign-accented speech, research has focused on single phonetic contrasts artificially manipulated to create ambiguous sounds; however, accented speech varies along many dimensions. It is therefore unclear whether analogies to adaptation to accented speech are warranted. In the present studies, the to-be-adapted ambiguous sound was embedded in a global foreign accent. In addition, conditions of cross-speaker generalization were tested with focus on the extent to which perceptual similarity between 2 speakers’ fricatives is a condition for generalization to occur. Results showed that listeners retune phoneme categories manipulated within the context of a global foreign accent, and that they generalize this short-term learning to the perception of phonemes from previously unheard speakers. However, generalization was observed only when exposure and test speakers’ fricatives were sampled across a similar perceptual space.
doi:10.1037/a0034409
PMCID: PMC3962813  PMID: 24059846
speech perception; foreign accent; perceptual learning; lexically-guided phonetic category retuning; speaker generalization
4.  Parafoveal-foveal Overlap Can Facilitate Ongoing Word Identification During Reading: Evidence from Eye Movements 
Readers continuously receive parafoveal information about the upcoming word in addition to the foveal information about the currently fixated word. Previous research (Inhoff, Radach, Starr, & Greenberg, 2000) showed that the presence of a parafoveal word that was similar to the foveal word facilitated processing of the foveal word. In three experiments, we used the gaze-contingent boundary paradigm (Rayner, 1975) to manipulate the parafoveal information that subjects received before or while fixating a target word (e.g., news) within a sentence. Specifically, a reader’s parafovea could contain a repetition of the target (news), a correct preview of the post-target word (once), an unrelated word (warm), random letters (cxmr), a nonword neighbor of the target (niws), a semantically related word (tale), or a nonword neighbor of that word (tule). Target fixation times were significantly lower in the parafoveal repetition condition than in all other conditions, suggesting that foveal processing can be facilitated by parafoveal repetition. We present a simple model framework that can account for these effects.
doi:10.1037/a0029492
PMCID: PMC3596446  PMID: 22866764
5.  On the anisotropy of perceived ground extents and the interpretation of walked distance as a measure of perception 
Three experiments are reported concerning the perception of ground extent, conducted to discover whether prior reports of anisotropy between frontal extents and extents in depth were consistent across different measures (visual matching and pantomime walking) and test environments (outdoor and virtual environments). In Experiment 1 it was found that depth extents of up to 7 m are indeed perceptually compressed relative to frontal extents in an outdoor environment, and that perceptual matching provided more precise estimates than did pantomime walking. In Experiment 2, similar anisotropies were found using similar tasks in a similar (but virtual) environment. In both experiments, pantomime walking measures seemed to additionally compress the range of responses. Experiment 3 supported the hypothesis that range compression in walking measures of perceived distance might be due to proactive interference (memory contamination). It is concluded that walking measures are calibrated for perceived egocentric distance, but that pantomime walking measures may suffer range compression. Depth extents along the ground are perceptually compressed relative to frontal ground extents in a manner consistent with the angular scale expansion hypothesis.
doi:10.1037/a0029405
PMCID: PMC3600084  PMID: 22889186
Spatial perception; path integration; distance perception; leaky integration; angular scale expansion
6.  The development of individuation in autism 
Evidence suggests that people with autism use holistic information differently than typical adults. The current studies examine this possibility by investigating how core visual processes that contribute to holistic processing (individuation and element grouping) develop in participants with autism and typically developing (TD) participants matched for age, IQ, and gender. Individuation refers to the ability to “see” up to 4 elements simultaneously; grouping these elements can change the number of elements that are rapidly apprehended. We examined these core processes using two well-established paradigms, rapid enumeration and multiple object tracking (MOT). In both tasks, a performance limit of about 4 elements in adulthood is thought to reflect individuation capacity. Participants with autism had a smaller individuation capacity than TD controls, regardless of whether they were enumerating static elements or tracking moving ones. To manipulate holistic information and individuation performance, we grouped the elements into a design or had elements move together. Participants with autism were affected to a similar degree as TD participants by the holistic information, whether the manipulation helped or hurt performance, consistent with evidence that some types of gestalt/grouping information are processed typically in autism. There was substantial development in autism from childhood to adolescence, but not from adolescence to adulthood, a pattern distinct from that of TD participants. These results provide important information about core visual processes in autism, as well as insight into the architecture of vision (e.g., individuation appears distinct from visual strengths in autism, such as visual search, despite similarities).
doi:10.1037/a0029400
PMCID: PMC3608798  PMID: 22963232
autism; subitizing; global-local; individuation; holistic; configural; grouping; gestalt; indexing
7.  When does repeated search in scenes involve memory? Looking AT versus looking FOR objects in scenes 
One might assume that familiarity with a scene or previous encounters with objects embedded in a scene would benefit subsequent search for those items. However, in a series of experiments we show that this is not the case: when participants were asked to subsequently search for multiple objects in the same scene, search performance remained essentially unchanged over the course of searches despite increasing scene familiarity. Similarly, looking at target objects during previews, which included letter search, 30 seconds of free viewing, or even 30 seconds of memorizing a scene, also did not benefit search for the same objects later on. However, when the same object was searched for again, memory for the previous search produced very substantial speeding of search despite many different intervening searches. This was especially the case when the previous search engagement had been active rather than supported by a cue. While these search benefits speak to the strength of memory-guided search when the same search target is repeated, the lack of memory guidance during initial object searches, despite previous encounters with the target objects, demonstrates the dominance of guidance by generic scene knowledge in real-world search.
doi:10.1037/a0024147
PMCID: PMC3969238  PMID: 21688939
8.  Information-limited parallel processing in difficult heterogeneous covert visual search 
Difficult visual search is often attributed to time-limited serial attention operations, although neural computations in the early visual system are parallel. Using probabilistic search models (Dosher, Han, & Lu, 2004) and a full time-course analysis of the dynamics of covert visual search, we distinguish unlimited-capacity parallel versus serial search mechanisms. Performance is measured for difficult and error-prone searches among heterogeneous background elements and for easy and accurate searches among homogeneous background elements. Contrary to the claims of time-limited serial attention, searches in heterogeneous backgrounds exhibited nearly identical search dynamics for display sizes up to 12 items. A review and new analyses indicate that most difficult as well as easy visual searches operate as an unlimited-capacity parallel analysis over the visual field within a single eye fixation, which suggests limitations in the availability of information, not temporal bottlenecks in analysis or comparison. Serial properties likely reflect overt attention expressed in eye movements.
doi:10.1037/a0020366
PMCID: PMC3929106  PMID: 20873936
Visual attention; Serial and parallel processing architecture; Distracter Homogeneity
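The serial/parallel contrast at issue can be made concrete with a toy simulation: under a serial inspection account, the probability of having processed the target by a given time falls with display size, whereas an unlimited-capacity parallel account predicts set-size-invariant dynamics. The sketch below is a deliberately simplified illustration, not the probabilistic search models of Dosher, Han, and Lu (2004):

```python
import numpy as np

def p_target_processed_serial(n_items, t_ms, time_per_item=50.0):
    """Serial account: items are inspected one at a time, so the chance
    that the target's (random) queue position has been reached by time t
    shrinks as display size grows."""
    n_inspected = np.minimum(t_ms / time_per_item, n_items)
    return n_inspected / n_items

def p_target_processed_parallel(t_ms, rate=0.004):
    """Unlimited-capacity parallel account: every item accrues evidence
    simultaneously, so the target's processing time is independent of
    display size."""
    return 1.0 - np.exp(-rate * t_ms)

for n in (4, 12):
    print(f"n={n:2d}  serial: {p_target_processed_serial(n, 300):.2f}"
          f"  parallel: {p_target_processed_parallel(300):.2f}")
```

The abstract's finding of nearly identical dynamics at display sizes up to 12 is the signature of the second pattern.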
9.  Perceptual load corresponds with factors known to influence visual search 
One account of the early versus late selection debate in attention proposes that perceptual load determines the locus of selection. Attention selects stimuli at a late processing level under low-load conditions but selects stimuli at an early level under high-load conditions. Despite the successes of perceptual load theory, a non-circular definition of perceptual load remains elusive. We investigated the factors that influence perceptual load by using manipulations that have been studied extensively in visual search, namely target-distractor similarity and distractor-distractor similarity. Consistent with previous work, search was most efficient when targets and distractors were dissimilar and the displays contained homogeneous distractors; search became less efficient when target-distractor similarity increased irrespective of display heterogeneity. Importantly, we used these same stimuli in a typical perceptual load task that measured attentional spill-over to a task-irrelevant flanker. We found a strong correspondence between search efficiency and perceptual load; stimuli that generated efficient searches produced flanker interference effects, suggesting that such displays involved low perceptual load. Flanker interference effects were reduced in displays that produced less efficient searches. Furthermore, our results demonstrate that search difficulty, as measured by search intercept, has little bearing on perceptual load. These results suggest that perceptual load might be defined in part by well-characterized, continuous factors that influence visual search.
doi:10.1037/a0031616
PMCID: PMC3928141  PMID: 23398258
perceptual load; selective attention; visual search; search efficiency; search difficulty
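As the abstract uses the terms, search efficiency is the slope and search difficulty the intercept of a linear fit of RT against display size; a minimal reminder of that decomposition, with invented numbers:

```python
import numpy as np

# Conventional visual-search decomposition: the slope of mean RT vs.
# display size indexes search efficiency; the intercept indexes
# set-size-independent difficulty. Numbers are invented for illustration.
set_sizes = np.array([4, 8, 12, 16])
mean_rts = np.array([620, 700, 790, 860])  # ms

slope, intercept = np.polyfit(set_sizes, mean_rts, 1)
print(f"slope ~ {slope:.1f} ms/item (efficiency)")
print(f"intercept ~ {intercept:.0f} ms (difficulty)")
```

On this reading, the study's claim is that flanker interference tracks the slope (shallow search slopes co-occur with interference, i.e., low load) while the intercept has little bearing on load.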
10.  Object-based attention overrides perceptual load to modulate visual distraction 
The ability to ignore task-irrelevant information and overcome distraction is central to our ability to efficiently carry out a number of tasks. One factor shown to strongly influence distraction is the perceptual load of the task being performed; as the perceptual load of task-relevant information processing increases, the likelihood that task-irrelevant information will be processed and interfere with task performance decreases. However, it has also been demonstrated that other attentional factors play an important role in whether or not distracting information affects performance. Specifically, object-based attention can modulate the extent of distractor processing, leaving open the possibility that object-based attention mechanisms may directly modulate the way in which perceptual load affects distractor processing. Here, we show that object-based attention dominates perceptual load to determine the extent of task-irrelevant information processing, with distractors affecting performance only when they are contained within the same object as the task-relevant search display. These results suggest that object-based attention effects play a central role in selective attention regardless of the perceptual load of the task being performed.
doi:10.1037/a0027406
PMCID: PMC3924541  PMID: 22390296
Selective Attention; Perceptual Load; Object-based Attention; Visual Attention; Perception; Perceptual Grouping
11.  Context-dependent control over attentional capture 
A number of studies have demonstrated that the likelihood of a salient item capturing attention is dependent on the “attentional set” an individual employs in a given situation. The instantiation of an attentional set is often viewed as a strategic, voluntary process, relying on working memory systems that represent immediate task priorities. However, influential theories of attention and automaticity propose that goal-directed control can operate more or less automatically on the basis of longer-term task representations, a notion supported by a number of recent studies. Here, we provide evidence that longer-term contextual learning can rapidly and automatically influence the instantiation of a given attentional set. Observers learned associations between specific attentional sets and specific task-irrelevant background scenes during a training session, and in the ensuing test session simply reinstating particular scenes on a trial-by-trial basis biased observers to employ the associated attentional set. This directly influenced the magnitude of attentional capture, suggesting that memory for the context in which a task is performed can play an important role in the ability to instantiate a particular attentional set and overcome distraction by salient, task-irrelevant information.
doi:10.1037/a0030027
PMCID: PMC3924559  PMID: 23025581
Visual Attention; Selective Attention; Learning; Memory; Attentional Capture; Scenes; Context
12.  Establishment of an attentional set via statistical learning 
The ability to overcome attentional capture and attend to goal-relevant information is typically viewed as a volitional, effortful process that relies on the maintenance of current task priorities or “attentional sets” in working memory. However, the visual system possesses statistical learning mechanisms that can incidentally encode probabilistic associations between goal-relevant objects and the attributes likely to define them. Thus, it is possible that statistical learning may contribute to the establishment of a given attentional set and modulate the effects of attentional capture. Here we provide evidence for such a mechanism, showing that implicitly learned associations between a search target and its likely color directly influence the ability of a salient color precue to capture attention in a classic attentional capture task. This indicates a novel role for statistical learning in the modulation of attentional capture, and emphasizes the role that this learning may play in goal-directed attentional control more generally.
doi:10.1037/a0034489
PMCID: PMC3914310  PMID: 24099589
Visual Attention; Attentional Capture; Statistical Learning
13.  Incidental and context-responsive activation of structure- and function-based action features during object identification 
Previous studies suggest that action representations are activated during object processing, even when task-irrelevant. In addition, there is evidence that lexical-semantic context may affect such activation during object processing. Finally, prior work from our laboratory and others indicates that function-based (“use”) and structure-based (“move”) action subtypes may differ in their activation characteristics. Most studies assessing such effects, however, have required manual object-relevant motor responses, thereby plausibly influencing the activation of action representations. The present work utilizes eyetracking and a Visual World Paradigm task without object-relevant actions to assess the time course of activation of action representations, as well as their responsiveness to lexical-semantic context. In two experiments, participants heard a target word and selected its referent from an array of four objects. Gaze fixations on non-target objects signal activation of features shared between targets and non-targets. The experiments assessed activation of structure-based (Experiment 1) or function-based (Experiment 2) distractors, using neutral sentences (“S/he saw the …”) or sentences with a relevant action verb (Experiment 1: “S/he picked up the …”; Experiment 2: “S/he used the …”). We observed task-irrelevant activations of action information in both experiments. In neutral contexts, structure-based activation was relatively faster-rising but more transient than function-based activation. Additionally, action verb contexts reliably modified patterns of activation in both experiments. These data provide fine-grained information about the dynamics of activation of function-based and structure-based actions in neutral and action-relevant contexts, in support of the “Two Action System” model of object and action processing (e.g., Buxbaum & Kalénine, 2010).
doi:10.1037/a0027533
PMCID: PMC3371276  PMID: 22390294
Two Action Systems hypothesis; action representations; object concepts; context; eye tracking; object use; object grasping
14.  Rapid acquisition but slow extinction of an attentional bias in space 
Substantial research has focused on the allocation of spatial attention based on goals or perceptual salience. In everyday life, however, people also direct attention using their previous experience. Here we investigate the pace at which people incidentally learn to prioritize specific locations. Participants searched for a T among Ls in a visual search task. Unbeknownst to them, the target was more often located in one region of the screen than in other regions. An attentional bias toward the rich region developed over dozens of trials. However, the bias did not rapidly readjust to new contexts. It persisted for at least a week and for hundreds of trials after the target’s position became evenly distributed. The persistence of the bias did not reflect a long window over which visual statistics were calculated. Long-term persistence differentiates incidentally learned attentional biases from the more flexible goal-driven attention.
doi:10.1037/a0027611
PMCID: PMC3382032  PMID: 22428675
spatial attention; statistical learning; experience-driven attention
15.  Guidance of spatial attention by incidental learning and endogenous cuing 
Our visual system is highly sensitive to regularities in the environment. Locations that were important in one’s previous experience are often prioritized during search, even though observers may not be aware of the learning. In this study we characterized the guidance of spatial attention by incidental learning of a target’s spatial probability, and examined the interaction between endogenous cuing and probability cuing. Participants searched for a target (T) among distractors (L’s). The target was more often located in one region of the screen than in others. We found that search RT was faster when the target appeared in the high-frequency region rather than the low-frequency regions. This difference increased when there were more items on the display, suggesting that probability cuing guides spatial attention. Additional data indicated that on their own, probability cuing and endogenous cuing (e.g., a central arrow that predicted a target’s location) were similarly effective at guiding attention. However, when both cues were presented at once, probability cuing was largely eliminated. Thus, although both incidental learning and endogenous cuing can effectively guide attention, endogenous cuing takes precedence over incidental learning.
doi:10.1037/a0028022
PMCID: PMC3431435  PMID: 22506784
spatial attention; incidental learning; endogenous attention; visual search
16.  The social psychology of perception experiments: Hills, backpacks, glucose and the problem of generalizability 
Experiments take place in a physical environment but also a social one. Generalizability from experimental manipulations to more typical contexts may be limited by violations of ecological validity with respect to either environment. A replication and extension of a recent study (a blood glucose manipulation) was conducted to investigate the effects of experimental demand (a social artifact) on participants’ judgments of the geographical slant of a large-scale outdoor hill. Three different assessments of experimental demand indicate that even when the physical environment is naturalistic and the goal of the main experimental manipulation is largely concealed, artificial aspects of the social environment (such as an explicit requirement to wear a heavy backpack while estimating the slant of a hill) may still be primarily responsible for altered judgments of hill orientation.
doi:10.1037/a0027805
PMCID: PMC3445748  PMID: 22428672
17.  Task Specificity and the Influence of Memory on Visual Search: Commentary on Võ and Wolfe (2012) 
Recent results from Võ and Wolfe (2012b) suggest that the application of memory to visual search may be task specific: Previous experience searching for an object facilitated later search for that object, but object information acquired during a different task did not appear to transfer to search. The latter inference depended on evidence that a preview task did not improve later search, but Võ and Wolfe used a relatively insensitive, between-subjects design. Here, we replicated the Võ and Wolfe study using a within-subject manipulation of scene preview. A preview session (focused either on object location memory or on the assessment of object semantics) reliably facilitated later search. In addition, information acquired from distractors in a scene facilitated search when the distractor later became the target. Instead of being strongly constrained by task, visual memory is applied flexibly to guide attention and gaze during visual search.
doi:10.1037/a0030237
PMCID: PMC3515208  PMID: 23205947
18.  Spatial attention modulates the precedence effect 
Communication and navigation in real environments rely heavily on the ability to distinguish objects in acoustic space. However, auditory spatial information is often corrupted by conflicting cues and noise such as acoustic reflections. Fortunately, the brain can apply mechanisms at multiple levels to emphasize target information and mitigate such interference. In a rapid phenomenon known as the precedence effect, reflections are perceptually fused with the veridical primary sound. The brain can also use spatial attention to highlight a target sound at the expense of distracters. Although attention has been shown to modulate many auditory perceptual phenomena, rarely does it alter how acoustic energy is first parsed into objects, as with the precedence effect. This brief report suggests that both endogenous (voluntary) and exogenous (stimulus-driven) spatial attention have a profound influence on the precedence effect depending on where they are oriented. Moreover, we observed that both types of attention could enhance perceptual fusion while only exogenous attention could hinder it. These results demonstrate that attention, by altering how auditory objects are formed, guides the basic perceptual organization of our acoustic environment.
doi:10.1037/a0028348
PMCID: PMC3437381  PMID: 22545599
precedence effect; cross-modal attention; spatial attention
19.  Templates for rejection: Configuring attention to ignore task-irrelevant features 
Theories of attention are compatible with the idea that we can bias attention to avoid selecting objects that have known nontarget features. Although this may underlie several existing phenomena, the explicit guidance of attention away from known nontargets has yet to be demonstrated. Here we show that observers can use feature cues (i.e., color) to bias attention away from nontarget items during visual search. These negative cues were used to quickly instantiate a template for rejection that reliably facilitated search across the cue-to-search stimulus onset asynchronies (SOAs), although negative cues were not as potent as cues that guide attention toward target features. Furthermore, by varying the search set size we show that a template for rejection is increasingly effective in facilitating search as scene complexity increases. Our findings demonstrate that knowing what not to look for can be used to configure attention to avoid certain features, complementing what is known about setting attention to select certain target features.
doi:10.1037/a0027885
PMCID: PMC3817824  PMID: 22468723
20.  Individual Differences in the Multisensory Temporal Binding Window Predict Susceptibility to Audiovisual Illusions 
Human multisensory systems are known to bind inputs from the different sensory modalities into a unified percept, a process that leads to measurable behavioral benefits. This integrative process can be observed through multisensory illusions, including the McGurk effect and the sound-induced flash illusion, both of which demonstrate the ability of one sensory modality to modulate perception in a second modality. Such multisensory integration is highly dependent upon the temporal relationship of the different sensory inputs, with perceptual binding occurring within a limited range of asynchronies known as the temporal binding window (TBW). Previous studies have shown that this window is highly variable across individuals, but it is unclear how these variations in the TBW relate to an individual’s ability to integrate multisensory cues. Here we provide evidence linking individual differences in multisensory temporal processes to differences in the individual’s audiovisual integration of illusory stimuli. Our data provide strong evidence that the temporal processing of multiple sensory signals and the merging of multiple signals into a single, unified perception are highly related. Specifically, the width of the right side of an individual’s TBW, where the auditory stimulus follows the visual, is significantly correlated with the strength of illusory percepts, as indexed via both an increase in the binding of synchronous sensory signals and an improvement in correctly dissociating asynchronous signals. These findings are discussed in terms of their possible neurobiological basis, relevance to the development of sensory integration, and possible importance for clinical conditions in which there is growing evidence that multisensory integration is compromised.
doi:10.1037/a0027339
PMCID: PMC3795069  PMID: 22390292
multisensory integration; cross-modal; McGurk; sound-induced flash illusion; perception; temporal processing
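One common way to quantify an asymmetric TBW, and in particular its right (auditory-lagging) side, is to fit a two-sided Gaussian to the proportion of "simultaneous" reports across stimulus onset asynchronies. The sketch below uses invented data and an assumed fitting choice, and is not necessarily the procedure used in the study:

```python
import numpy as np
from scipy.optimize import curve_fit

# Invented simultaneity-judgment data: positive SOA = auditory lags visual.
soas = np.array([-300, -200, -100, 0, 100, 200, 300], dtype=float)
p_sync = np.array([0.10, 0.30, 0.70, 0.95, 0.85, 0.55, 0.20])

def two_sided_gauss(soa, mu, sig_left, sig_right, amp):
    # Separate spreads on the visual-leading (left) and auditory-lagging
    # (right) sides capture the window's typical asymmetry.
    sigma = np.where(soa < mu, sig_left, sig_right)
    return amp * np.exp(-0.5 * ((soa - mu) / sigma) ** 2)

(mu, sig_left, sig_right, amp), _ = curve_fit(
    two_sided_gauss, soas, p_sync, p0=(0.0, 100.0, 100.0, 1.0))
print(f"right-side TBW width ~ {sig_right:.0f} ms")
```

Individual right-side widths estimated this way could then be correlated with each observer's illusion susceptibility, which is the relationship the abstract reports.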
21.  Flicker adaptation of low-level cortical visual neurons contributes to temporal dilation 
Several seconds of adaptation to a flickered stimulus causes a subsequent brief static stimulus to appear longer in duration. Non-sensory factors such as increased arousal and attention have been thought to mediate this flicker-based temporal-dilation aftereffect. Here we provide evidence that adaptation of low-level cortical visual neurons contributes to this aftereffect. The aftereffect was significantly reduced by a 45° change in Gabor orientation between adaptation and test. Because orientation-tuning bandwidths are smaller in lower-level cortical visual areas and are approximately 45° in human V1, the result suggests that flicker adaptation of orientation-tuned V1 neurons contributes to the temporal-dilation aftereffect. The aftereffect was abolished when the adaptor and test stimuli were presented to different eyes. Because eye preferences are strong in V1 but diminish in higher-level visual areas, the eye specificity of the aftereffect corroborates the involvement of low-level cortical visual neurons. Our results thus suggest that flicker adaptation of low-level cortical visual neurons contributes to expanding visual duration. Furthermore, this temporal-dilation aftereffect dissociates from the previously reported temporal-constriction aftereffect on the basis of the differences in their orientation and flicker-frequency selectivity, suggesting that the visual system possesses at least two distinct and potentially complementary mechanisms for adaptively coding perceived duration.
doi:10.1037/a0029495
PMCID: PMC3758686  PMID: 22866761
22.  It feels like it’s me: interpersonal multisensory stimulation enhances visual remapping of touch from other to self 
Understanding other people’s feelings in social interactions depends on the ability to map onto our own body the sensory experiences we observe on other people’s bodies. It has been shown that the perception of tactile stimuli on the face is improved when concurrently viewing a face being touched. This Visual Remapping of Touch (VRT) is enhanced the more similar others are perceived to be to the self, and is strongest when viewing one’s own face. Here, we ask whether altering self-other boundaries can in turn change the VRT effect. We used the enfacement illusion, which relies on synchronous interpersonal multisensory stimulation (IMS), to manipulate self-other boundaries. Following synchronous, but not asynchronous, IMS, the self-related enhancement of the VRT extended to the other individual. These findings suggest that shared multisensory experiences represent one key way to overcome the boundaries between self and others, as evidenced by changes in somatosensory processing of tactile stimuli on one’s own face when concurrently viewing another person’s face being touched.
doi:10.1037/a0031049
PMCID: PMC3750640  PMID: 23276110
Multisensory Interaction; Visual Remapping of Touch; Interpersonal Multisensory Stimulation; Self-recognition; Enfacement illusion
23.  Feature Assignment in Perception of Auditory Figure 
Because the environment often includes multiple sounds that overlap in time, listeners must segregate a sound of interest (the auditory figure) from other co-occurring sounds (the unattended auditory ground). We conducted a series of experiments to clarify the principles governing the extraction of auditory figures. We distinguish between auditory “objects” (relatively punctate events, such as a dog's bark) and auditory “streams” (sounds involving a pattern over time, such as a galloping rhythm). In Experiments 1 and 2, on each trial two sounds, an object (a vowel) and a stream (a series of tones), were presented with one target feature that could be perceptually grouped with either source. In each block of these experiments, listeners were required to attend to one of the two sounds and report its perceived category. Across several experimental manipulations, listeners were more likely to allocate the feature to an impoverished object if the result of the grouping was a good, identifiable object. Perception of objects was quite sensitive to feature variation (noise masking), whereas perception of streams was more robust to feature variation. In Experiment 3, the number of sound sources competing for the feature was increased to three. This produced a shift toward relying more on spatial cues than on the potential contribution of the feature to an object's perceptual quality. The results support a distinction between auditory objects and streams, and provide new information about the way that the auditory world is parsed.
doi:10.1037/a0026789
PMCID: PMC3342414  PMID: 22288691
Auditory Figure; Auditory Scene Analysis; Auditory Perceptual Organization
24.  Dissociable Roles of Different Types of Working Memory Load in Visual Detection 
We contrasted the effects of different types of working memory (WM) load on detection. Considering the sensory-recruitment hypothesis of visual short-term memory (VSTM) within load theory (e.g., Lavie, 2010) led us to predict that VSTM load would reduce visual-representation capacity, thus leading to reduced detection sensitivity during maintenance, whereas load on WM cognitive control processes would reduce priority-based control, thus leading to enhanced detection sensitivity for a low-priority stimulus. During the retention interval of a WM task, participants performed a visual-search task while also being asked to detect a masked stimulus in the periphery. Loading WM cognitive control processes (with the demand to maintain a random digit order [vs. a fixed order in conditions of low load]) led to enhanced detection sensitivity. In contrast, loading VSTM (with the demand to maintain the colors and positions of six squares [vs. one in conditions of low load]) reduced detection sensitivity, an effect comparable with that found for manipulating perceptual load in the search task. The results confirmed our predictions and established a new functional dissociation between the roles of different types of WM load in the fundamental visual perception process of detection.
doi:10.1037/a0033037
PMCID: PMC3725889  PMID: 23713796
visual working memory; executive cognitive control; selective attention; perceptual load; visual detection
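Detection sensitivity in designs like this is standardly summarized as d′ from hit and false-alarm rates; a minimal reminder of the computation, with illustrative counts rather than the study's data:

```python
from statistics import NormalDist

# Signal detection theory: d' = z(hit rate) - z(false-alarm rate).
# Counts below are illustrative, not taken from the study above.
z = NormalDist().inv_cdf

hits, misses = 42, 18                      # stimulus-present trials
false_alarms, correct_rejections = 9, 51   # stimulus-absent trials

hit_rate = hits / (hits + misses)
fa_rate = false_alarms / (false_alarms + correct_rejections)
d_prime = z(hit_rate) - z(fa_rate)
print(f"d' ~ {d_prime:.2f}")  # ~1.56 with these counts
```

The dissociation the abstract reports is then a rise in d′ under cognitive-control load and a fall in d′ under VSTM load.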
25.  Compensation for coarticulation: Disentangling auditory and gestural theories of perception of coarticulatory effects in speech 
According to one approach to speech perception, listeners perceive speech by applying general pattern-matching mechanisms to the acoustic signal (e.g., Diehl, Lotto, & Holt, 2004). An alternative is that listeners perceive the phonetic gestures that structured the acoustic signal (e.g., Fowler, 1986). The two accounts have offered different explanations for the phenomenon of compensation for coarticulation (CfC). An example of CfC is that if a speaker produces a gesture with a front place of articulation, it may be pulled slightly backwards when it follows a back place of articulation, and listeners’ category boundaries shift (compensate) accordingly. The gestural account appeals to direct attunement to coarticulation to explain CfC, while the auditory account explains it by spectral contrast. In previous studies, spectral contrast and gestural consequences of coarticulation have been correlated, such that both accounts made identical predictions. We identify a liquid context in Tamil that disentangles contrast and coarticulation, such that the two accounts make different predictions. In a standard CfC task in Experiment 1, gestural coarticulation rather than spectral contrast determined the direction of CfC. Experiments 2, 3, and 4 demonstrated that tone analogues of the speech precursors failed to produce the same effects observed in Experiment 1, suggesting that simple spectral contrast cannot account for the findings of Experiment 1.
doi:10.1037/a0018391
PMCID: PMC3698240  PMID: 20695714
Compensation for coarticulation; speech perception; direct realism; articulatory; Tamil
