The present study combines the object-reviewing paradigm (Kahneman, Treisman, & Gibbs, 1992) with the checkershadow illusion (Adelson, 1995) to contrast the effects of objects’ luminance versus lightness on the object-specific preview benefit. To this end, we manipulated objects’ luminance and the amount of illumination given by an informative background scene in four experiments. In line with previous studies (Moore, Stephens, & Hein, 2010), there was no object-specific preview benefit when objects were presented on a uniformly colored background and luminance switched between objects. In contrast, when objects were presented on the checkershadow illusion background, which provided an explanation for the luminance switch, a reliable object-specific preview benefit was observed. This suggests that object correspondence as measured by the object-reviewing paradigm can be influenced by scene-induced, perceived lightness of objects’ surfaces. We replicated this finding and moreover showed that the scene context only influences the object-specific preview benefit if the objects are perceived as part of the background scene.
illumination frame of reference; luminance; lightness; object updating; object-reviewing paradigm
Speech perception depends on long-term representations that reflect regularities of the native language. However, listeners rapidly adapt when speech acoustics deviate from these regularities due to talker idiosyncrasies such as foreign accents and dialects. To better understand these dual aspects of speech perception, we probe native English listeners’ baseline perceptual weighting of two acoustic dimensions (spectral quality and vowel duration) toward vowel categorization and examine how they subsequently adapt to an “artificial accent” that deviates from English norms in the correlation between the two dimensions. At baseline, listeners rely relatively more on spectral quality than vowel duration to signal vowel category, but duration nonetheless contributes. Upon encountering an “artificial accent” in which the spectral-duration correlation is perturbed relative to English language norms, listeners rapidly down-weight reliance on duration. Listeners exhibit this type of short-term statistical learning even in the context of nonwords, confirming that lexical information is not necessary to this form of adaptive plasticity in speech perception. Moreover, learning generalizes to both novel lexical contexts and acoustically distinct altered voices. These findings are discussed in the context of a mechanistic proposal for how supervised learning may contribute to this type of adaptive plasticity in speech perception.
Adults learning a new language are faced with a significant challenge: non-native speech sounds that are perceptually similar to sounds in one’s native language can be very difficult to acquire. Sleep and native language interference, two factors that may help to explain this difficulty in acquisition, are addressed in three studies. Results of Experiment 1 showed that participants trained on a non-native contrast at night improved in discrimination 24 hours after training, while those trained in the morning showed no such improvement. Experiments 2 and 3 addressed the possibility that incidental exposure to perceptually similar native language speech sounds during the day interfered with maintenance in the morning group. Taken together, results show that the ultimate success of non-native speech sound learning depends not only on the similarity of learned sounds to the native language repertoire, but also on interference from native language sounds before sleep.
A target sound can become more audible and may ‘pop out’
from a simultaneously presented masker if the masker is presented first by
itself, as a precursor. This phenomenon, known as auditory enhancement, may
reflect the general perceptual principle of contrast enhancement, which
facilitates adaptation to ongoing acoustic conditions and the detection of new
events. Little is known about the mechanisms underlying enhancement, and
potential confounding factors have made the size of the effect and its time
course a point of contention. Here we measured enhancement as a function of
precursor duration and delay between precursor offset and target onset, using
two single-interval pitch comparison tasks, which involve either same-different
or up-down judgments, to avoid the potential confounds of earlier studies.
Although these two tasks elicit different levels of performance and may reflect
different underlying mechanisms, they produced similar amounts of enhancement.
The effect decreased with decreasing precursor duration, but remained present
for precursors as short as 62.5 milliseconds, and decreased with increasing gap
between the precursor and target, but remained measurable 1 second after the
precursor. Additional conditions, examining the effect of precursor/masker
similarity and the possible role of grouping and cueing, suggest multiple
sources of auditory enhancement.
Auditory perception; Contrast enhancement; Perceptual invariance
Accumulation-of-evidence models of perceptual decision making have been able to account for data from a wide range of domains at an impressive level of precision. In particular, Ratcliff’s (1978) diffusion model has been used across many different two-choice tasks in which the response is executed via a key press. In this article we present two experiments in which we used a letter discrimination task exploring three central aspects of a two-choice task: the discriminability of the stimulus, the modality of the response execution (eye movement, key pressing, and pointing on a touchscreen), and the mapping of the response areas for the eye movement and the touchscreen conditions (consistent vs. inconsistent). We fitted the diffusion model to the data from these experiments and examined the behavior of the model’s parameters. Fits of the model were consistent with the hypothesis that the same decision mechanism is used in the task with three different response methods. Drift rates were affected by the duration of the presentation of the stimulus, whereas response-execution time changed as a function of the response modality.
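The diffusion model named in the abstract treats a decision as noisy evidence drifting between two boundaries, with a nondecision component (stimulus encoding plus response execution) added to the decision time. The sketch below is not from the article; it is a minimal single-trial simulation of that standard process, with all parameter values chosen purely for illustration.

```python
import random

def simulate_ddm(drift, boundary=1.0, start=0.5, noise=1.0,
                 dt=0.001, t0=0.3, rng=None):
    """Simulate one trial of a simple two-boundary diffusion process.

    drift     : mean rate of evidence accumulation (stimulus quality)
    boundary  : separation between the two response criteria
    start     : starting point as a fraction of the boundary
    t0        : nondecision time (encoding + response execution)
    Returns (choice, rt), where choice is 'upper' or 'lower' and rt
    includes the nondecision time t0.
    """
    rng = rng or random.Random()
    x = start * boundary            # initial evidence level
    t = 0.0
    sd = noise * dt ** 0.5          # per-step noise standard deviation
    while 0.0 < x < boundary:       # accumulate until a boundary is hit
        x += drift * dt + rng.gauss(0.0, sd)
        t += dt
    return ('upper' if x >= boundary else 'lower', t + t0)
```

In this framing, the article's result corresponds to the drift parameter varying with stimulus presentation duration while t0 absorbs differences between eye-movement, key-press, and touchscreen responding.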
Alexithymia is a subclinical condition traditionally characterized by difficulties identifying and describing one’s own emotions. Recent formulations of alexithymia, however, suggest that the condition may result from a generalized impairment in the perception of all bodily signals (“interoception”). Interoceptive accuracy has been associated with a variety of deficits in social cognition, but recently with an improved ability to inhibit the automatic tendency to imitate the actions of others. The current study tested the consequences for social cognition of the hypothesized association between alexithymia and impaired interoception by examining the relationship between alexithymia and the ability to inhibit imitation. If alexithymia is best characterized as a general interoceptive impairment, then one would predict that alexithymia would have the same relationship with the ability to control imitation as does interoceptive accuracy. Forty-three healthy adults completed measures of alexithymia, imitation-inhibition, and as a control, inhibition of nonimitative spatial compatibility. Results revealed the predicted relationship, such that increasing alexithymia was associated with an improved ability to inhibit imitation, and that this relationship was specific to imitation-inhibition. These results support the characterization of alexithymia as a general interoceptive impairment and shed light on the social ability of alexithymic individuals—with implications for the multitude of psychiatric, neurological, and neurodevelopmental disorders associated with high rates of alexithymia.
alexithymia; interoception; imitation-inhibition; self–other processing
During reach-to-grasp movements, the hand is gradually molded to conform to the size and shape of the object to be grasped. Yet the ability to glean information about object properties by observing grasping movements is poorly understood. In this study, we capitalized on the effect of object size to investigate the ability to discriminate the size of an invisible object from movement kinematics. The study consisted of 2 phases. In the first action execution phase, to assess grip scaling, we recorded and analyzed reach-to-grasp movements performed toward differently sized objects. In the second action observation phase, video clips of the corresponding movements were presented to participants in a two-alternative forced-choice task. To probe discrimination performance over time, videos were edited to provide selective vision of different periods from 2 viewpoints. Separate analyses were conducted to determine how the participants’ ability to discriminate between stimulus alternatives (Type I sensitivity) and their metacognitive ability to discriminate between correct and incorrect responses (Type II sensitivity) varied over time and viewpoint. We found that as early as 80 ms after movement onset, participants were able to discriminate object size from the observation of grasping movements delivered from the lateral viewpoint. For both viewpoints, information pickup closely matched the evolution of the hand’s kinematics, reaching an almost perfect performance well before the fingers made contact with the object (60% of movement duration). These findings suggest that observers are able to decode object size from kinematic sources specified early on in the movement.
action prediction; reach-to-grasp; kinematics; time course; object size
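The Type I sensitivity analysis mentioned above is conventionally computed as d′ from hit and false-alarm rates; Type II (metacognitive) sensitivity can be computed with the same machinery by treating "response was correct" as the signal and high confidence as a "yes". This is a generic sketch of that standard computation, not code from the study; the log-linear correction shown is one common choice for handling empty cells.

```python
from statistics import NormalDist

def dprime(hits, misses, fas, crs):
    """Signal-detection sensitivity d' from raw response counts.

    Applies the log-linear correction (add 0.5 to each cell) so that
    perfect or empty cells do not produce infinite z-scores.
    """
    h = (hits + 0.5) / (hits + misses + 1.0)   # corrected hit rate
    f = (fas + 0.5) / (fas + crs + 1.0)        # corrected false-alarm rate
    z = NormalDist().inv_cdf                   # probit transform
    return z(h) - z(f)
```

For Type II sensitivity, the same function would be called with (high-confidence correct, low-confidence correct, high-confidence incorrect, low-confidence incorrect) counts.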
During scene viewing, saccades directed toward a recently fixated location tend to be delayed relative to saccades in other directions (“delay effect”), an effect attributable to inhibition-of-return (IOR) and/or saccadic momentum (SM). Previous work indicates this effect may be task-specific, suggesting that gaze control parameters are task-relevant and potentially affected by task-switching. Accordingly, the present study investigated task-set control of gaze behavior using the delay effect as a measure of task performance. The delay effect was measured as the effect of relative saccade direction on preceding fixation duration. Participants were cued on each trial to perform either a search, memory, or rating task. Tasks were performed either in pure-task or mixed-task blocks. This design allowed separation of switch-cost and mixing-cost. The critical result was that expression of the delay effect at 2-back locations was reversed on switch versus repeat trials such that return was delayed in repeat trials but speeded in switch trials. This difference between repeat and switch trials suggests that gaze-relevant parameters may be represented and switched as part of a task-set. Existing and new tests for dissociating IOR and SM accounts of the delay effect converged on the conclusion that the delay at 2-back locations was due to SM, and that task-switching affects SM. Additionally, the new test simultaneously replicated non-corroborating results in the literature regarding facilitation-of-return (FOR), which confirmed its existence and showed that FOR is “reversed” SM that occurs when preceding and current saccades are both directed toward the 2-back location.
gaze control; task-switching; facilitation of return; saccadic momentum; oculomotor inhibition of return; eye movements; scene viewing; task-set
Theories of attention and visual search explain how attention is guided toward
objects with known target features. But can attention be directed away from objects with a
feature known to be associated only with distractors? Most studies have found that the
demand to maintain the to-be-avoided feature in visual working memory biases attention
toward matching objects rather than away from them. In contrast, Arita, Carlisle, and Woodman (2012) claimed that attention can be
configured to selectively avoid objects that match a cued distractor color, and they
reported evidence that this type of negative cue generates search benefits. However, the
colors of the search array items in Arita et al. were segregated by hemifield (e.g., blue
items on the left, red on the right), which allowed for a strategy of translating the
feature-cue information into a simple spatial template (e.g., avoid right, or attend
left). In the present study, we replicated the negative cue benefit using the Arita et al.
method (albeit within a subset of participants who reliably used the color cues to guide
attention). Then, we eliminated the benefit by using search arrays that could not be
grouped by hemifield. Our results suggest that feature-guided avoidance is implemented
only indirectly, in this case by translating feature-cue information into a spatial template.
visual attention; visual working memory; visual search; attentional control; exclusionary template
We performed two eye movement studies to explore whether readers can extract character or word frequency information from nonfixated target words in Chinese reading. In Experiments 1A and 1B, we manipulated the character frequency of the first character in a two-character target word and the word frequency of a two-character target word, respectively. We found that fixation durations on the pre-target words were shorter when the first character of a two-character target word was of high frequency. Such effects were not observed for word frequency manipulations of a two-character target word. In particular, further analysis revealed that such effects only occurred for long pre-target fixations. These results for character and word frequency manipulations were replicated in a within-subjects design in Experiment 2. These findings are generally consistent with the notion that characters are processed in parallel during Chinese reading. However, we did not find evidence that words are processed in parallel during Chinese reading.
Chinese reading; eye movements; parafoveal-on-foveal effect; character processing; word processing
When engaged in a visual search for two targets, participants are slower and less accurate in their responses, relative to their performance when searching for a single target. Previous work on this “dual-target cost” has primarily focused on the breakdown of attention guidance when looking for two items. Here, we investigated how object identification processes are affected by dual-target search. Our goal was to chart the speed at which distractors could be rejected, in order to assess whether dual-target search impairs object identification. To do so, we examined the capacity coefficient, which measures the speed at which decisions can be made, and provides a baseline of parallel performance against which to compare. We found that participants could search at or above this baseline, suggesting that dual-target search does not impair object identification abilities. We also found substantial differences in performance when participants were asked to search for simple versus complex images. Somewhat paradoxically, participants were able to reject complex images more rapidly than simple images. We suggest that this reflects the greater number of features that can be used to identify complex images, a finding that has important consequences for understanding object identification in visual search more generally.
visual search; capacity coefficient; dual-target search
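The capacity coefficient referred to above is standardly defined (Townsend & Nozawa, 1995) as a ratio of cumulative hazard functions of response times: C(t) = H_AB(t) / (H_A(t) + H_B(t)), where H(t) = −log S(t) and S is the survivor function. The sketch below is a simple plug-in estimator for an OR (either-target) design, offered only to make the baseline concrete; published analyses typically use smoother estimators such as Nelson–Aalen.

```python
import math

def cumulative_hazard(rts, t):
    """Empirical cumulative hazard H(t) = -log S(t) from a sample of RTs."""
    surviving = sum(rt > t for rt in rts) / len(rts)
    return -math.log(surviving) if surviving > 0 else float('inf')

def capacity_or(rt_double, rt_single_a, rt_single_b, t):
    """Capacity coefficient C(t) for an OR (either-target) design.

    C(t) = 1 is the unlimited-capacity parallel baseline against which
    performance is compared; C(t) > 1 indicates super-capacity and
    C(t) < 1 limited capacity.
    """
    denom = (cumulative_hazard(rt_single_a, t)
             + cumulative_hazard(rt_single_b, t))
    return cumulative_hazard(rt_double, t) / denom
```

Searching "at or above baseline," as in the abstract, corresponds to C(t) at or above 1 across the relevant range of t.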
Effective interpersonal coordination is fundamental to robust social interaction, and the ability to anticipate a co-actor's behavior is essential for achieving this coordination. However, coordination research has focused on the behavioral synchrony that occurs between the simple periodic movements of co-actors and, thus, little is known about the anticipation that occurs during complex, everyday interaction. Research on the dynamics of coupled neurons, human motor control, electrical circuits, and laser semiconductors universally demonstrates that small temporal feedback delays are necessary for the anticipation of chaotic events. We therefore investigated whether similar feedback delays would promote anticipatory behavior during social interaction. Results revealed that co-actors were not only able to anticipate others' chaotic movements when experiencing small perceptual-motor delays, but also exhibited movement patterns of equivalent complexity. This suggests that such delays, including those within the human nervous system, may enhance, rather than hinder, the anticipatory processes that underlie successful social interaction.
anticipatory synchronization; interpersonal coordination; chaos; global coordination; complexity matching
Recent EEG/MEG studies suggest that when contextual information is highly predictive of some property of a linguistic signal, expectations generated from context can be translated into surprisingly low-level estimates of the physical form-based properties likely to occur in subsequent portions of the unfolding signal. Whether form-based expectations are generated and assessed during natural reading, however, remains unclear. We monitored eye movements while participants read phonologically typical and atypical nouns in noun-predictive contexts (Experiment 1), demonstrating that when a noun is strongly expected, fixation durations on first-pass eye movement measures, including first fixation duration, gaze duration, and go-past times, are shorter for nouns with category typical form-based features. In Experiments 2 and 3, typical and atypical nouns were placed in sentential contexts normed to create expectations of variable strength for a noun. Context and typicality interacted significantly at gaze duration. These results suggest that during reading, form-based expectations that are translated from higher-level category-based expectancies can facilitate the processing of a word in context, and that their effect on lexical processing is graded based on the strength of category expectancy.
Previous research has shown that actions impair the visual perception of categorically action-consistent stimuli. On the other hand, actions can also facilitate the perception of spatially action-consistent stimuli. We suggest that motorvisual impairment is due to action planning processes, while motorvisual facilitation is due to action control mechanisms. This implies that, because action planning is sensitive to modulations of cue-response mapping, motorvisual impairment should be as well, whereas motorvisual facilitation, like action control, should be insensitive to such manipulations. We tested this prediction in three dual-task experiments, studying the impact of performing left and right key presses on the perception of unrelated, categorically or spatially consistent stimuli. As expected, we found motorvisual impairment for categorically consistent stimuli and motorvisual facilitation for spatially consistent stimuli. In all experiments, we compared congruent with incongruent cue-key mappings. Mapping manipulations affected motorvisual impairment, but not motorvisual facilitation. The results support our suggestion that motorvisual impairment is due to action planning, and motorvisual facilitation to action control.
motorvisual; action-perception; visual attention; dual-task paradigm
Visuospatial attention is strongly biased toward locations that have frequently contained a search target. However, the function of this bias depends on the reference frame in which attended locations are coded. Previous research has shown a striking difference between tasks administered on a computer monitor and in a large environment, with the former inducing viewer-centered learning and the latter environment-centered learning. Why does environment-centered learning fail on a computer? Here we tested three possible influences on the reference frame of attention: spatial scale, the nature of the task, and locomotion. Participants searched for a target on a monitor placed flat on a stand. On each trial they stood at a different location around the monitor. The target was frequently located in a fixed area of the monitor, but changes in participants’ perspective rendered this area random relative to the participants. Under incidental learning conditions participants failed to acquire environment-centered learning even when (i) the task and display resembled the large-scale task, and (ii) the search task required locomotion. The difficulty in inducing environment-centered learning on a computer underscores the egocentric nature of visual attention. It supports the idea that spatial scale modulates the reference frame of attention.
spatial attention; spatial reference frame; visual statistical learning; implicit learning
The ability to navigate without getting lost is an important aspect of quality of life. In five studies, we evaluated how spatial learning is affected by the increased demands of keeping oneself safe while walking with degraded vision (mobility monitoring). We proposed that safe low-vision mobility requires attentional resources, providing competition for those needed to learn a new environment. In Experiments 1 and 2 participants navigated along paths in a real-world indoor environment with simulated degraded vision or normal vision. Memory for object locations seen along the paths was better with normal compared to degraded vision. With degraded vision, memory was better when participants were guided by an experimenter (low monitoring demands) versus unguided (high monitoring demands). In Experiments 3 and 4, participants walked while performing an auditory task. Auditory task performance was superior with normal compared to degraded vision. With degraded vision, auditory task performance was better when guided compared to unguided. In Experiment 5, participants performed both the spatial learning and auditory tasks under degraded vision. Results showed that attention mediates the relationship between mobility-monitoring demands and spatial learning. These studies suggest that more attention is required and spatial learning is impaired when navigating with degraded vision.
Visual selection can be biased toward nonspatial feature values such as color, but there is continued debate about whether this bias is subject to volitional control or whether it is an automatic bias toward recently seen target features (selection history). Although some studies have tried to separate these 2 sources of selection bias, mixed findings have not offered a clear resolution. The present work offers a possible explanation of conflicting findings by showing that the context in which a trial is presented can determine whether volitional control is observed. We used a cueing task that enabled independent assessments of the effects of color repetitions and current selection goals. When the target was presented among distractors with multiple colors (heterogeneous blocks), Experiment 1 revealed clear goal-driven selection effects, but these effects were eliminated when the target was a color singleton (pop-out blocks). When heterogeneous and pop-out displays were mixed within a block (Experiment 2), however, goal-driven selection was observed with both types of displays. In Experiment 3, this pattern was replicated using an encoding-limited task that included brief displays and an A′ measure of performance. Thus, goal-driven selection of nonspatial features is potentiated in contexts where there is strong competition with distractors. Selection history has powerful effects, but we find clear evidence that observers can exert volitional control over feature-based attention.
feature-based attention; goal-driven control; top-down control; biased competition
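The A′ measure used in Experiment 3 above is the standard nonparametric sensitivity index, conventionally computed from hit and false-alarm rates with the Pollack and Norman (1964) formula. The function below is a generic sketch of that textbook formula, not code from the study.

```python
def a_prime(hit_rate, fa_rate):
    """Nonparametric sensitivity index A' from hit and false-alarm rates.

    0.5 corresponds to chance performance and 1.0 to perfect
    discrimination. The standard formula assumes H >= F; when F > H
    the arguments are swapped and the result reflected around 0.5.
    """
    h, f = hit_rate, fa_rate
    if h < f:                       # below-chance responding: reflect
        return 1.0 - a_prime(f, h)
    if h == f:                      # no discrimination
        return 0.5
    return 0.5 + ((h - f) * (1.0 + h - f)) / (4.0 * h * (1.0 - f))
```

A′ is often preferred over d′ for brief-display, encoding-limited tasks like the one described, because it does not assume equal-variance Gaussian evidence distributions.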
It is now well known that the absence of attention can leave people unaware of both visual and auditory stimuli (e.g., Dalton & Fraenkel, 2012; Mack & Rock, 1998). However, the possibility of similar effects within the tactile domain has received much less research attention. Here, we introduce a new tactile inattention paradigm and use it to test whether tactile awareness depends on the level of perceptual load in a concurrent visual task. Participants performed a visual search task of either low or high perceptual load, while also responding to the presence or absence of a brief vibration delivered simultaneously to either the left or the right hand (present on 50% of trials). Detection sensitivity to the clearly noticeable tactile stimulus was reduced under high (vs. low) visual perceptual load. These findings provide the first robust demonstration of “inattentional numbness,” as well as demonstrating that this phenomenon can be induced by concurrent visual perceptual load.
inattentional numbness; tactile awareness; perceptual load; multisensory processing
Previous studies have shown that the perceptual organization of the visual scene constrains the deployment of attention. Here we investigated how the organization of multiple elements into larger configurations alters their attentional weight, depending on the “pertinence” or behavioral importance of the elements’ features. We assessed object-based effects on distinct aspects of the attentional priority map: top-down control, reflecting the tendency to encode targets rather than distracters, and the spatial distribution of attention weights across the visual scene, reflecting the tendency to report elements belonging to the same rather than different objects. In 2 experiments participants had to report the letters in briefly presented displays containing 8 letters and digits, in which pairs of characters could be connected with a line. Quantitative estimates of top-down control were obtained using Bundesen’s Theory of Visual Attention (1990). The spatial distribution of attention weights was assessed using the “paired response index” (PRI), indicating responses for within-object pairs of letters. In Experiment 1, grouping along the task-relevant dimension (targets with targets and distracters with distracters) increased top-down control and enhanced the PRI; in contrast, task-irrelevant grouping (targets with distracters) did not affect performance. In Experiment 2, we disentangled the effect of target-target and distracter-distracter grouping: Pairwise grouping of distracters enhanced top-down control whereas pairwise grouping of targets changed the PRI. We conclude that object-based perceptual representations interact with pertinence values (of the elements’ features and location) in the computation of attention weights, thereby creating a widespread pattern of attentional facilitation across the visual scene.
perceptual organization; visual selection; visual short-term memory; attentional priority map; Theory of Visual Attention
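In Bundesen's (1990) Theory of Visual Attention, cited above, the rate at which object x is encoded as a member of category i is v(x, i) = η(x, i) · β_i · w_x / Σ_z w_z, where the attentional weight w_x = Σ_j η(x, j) · π_j combines sensory evidence η with the pertinence values π the abstract refers to. The toy sketch below only illustrates these two equations; the data structures and example values are invented for illustration and are not from the article.

```python
def attentional_weights(eta, pertinence):
    """TVA attentional weights: w_x = sum_j eta(x, j) * pi_j.

    eta        : dict mapping object -> {feature: sensory evidence}
    pertinence : dict mapping feature -> pertinence value pi_j
    """
    return {x: sum(feats.get(j, 0.0) * pi for j, pi in pertinence.items())
            for x, feats in eta.items()}

def encoding_rates(eta, pertinence, beta, category):
    """Encoding rates v(x, i) = eta(x, i) * beta_i * w_x / sum_z w_z."""
    w = attentional_weights(eta, pertinence)
    total = sum(w.values())
    return {x: eta[x].get(category, 0.0) * beta * w[x] / total
            for x in eta}
```

With high pertinence on the target-defining feature (letters), a letter's encoding rate dominates a digit's, which is the sense in which the abstract's "top-down control" parameter reflects the tendency to encode targets rather than distracters.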
Motor theories of expression perception posit that observers simulate facial expressions within their own motor system, aiding perception and interpretation. Consistent with this view, reports have suggested that blocking facial mimicry induces expression labeling errors and alters patterns of ratings. Crucially, however, it is unclear whether changes in labeling and rating behavior reflect genuine perceptual phenomena (e.g., greater internal noise associated with expression perception or interpretation) or are products of response bias. In an effort to advance this literature, the present study introduces a new psychophysical paradigm for investigating motor contributions to expression perception that overcomes some of the limitations inherent in simple labeling and rating tasks. Observers were asked to judge whether smiles drawn from a morph continuum were sincere or insincere, in the presence or absence of a motor load induced by the concurrent production of vowel sounds. Having confirmed that smile sincerity judgments depend on cues from both eye and mouth regions (Experiment 1), we demonstrated that vowel production reduces the precision with which smiles are categorized (Experiment 2). In Experiment 3, we replicated this effect when observers were required to produce vowels, but not when they passively listened to the same vowel sounds. In Experiments 4 and 5, we found that gender categorizations, equated for difficulty, were unaffected by vowel production, irrespective of the presence of a smiling expression. These findings greatly advance our understanding of motor contributions to expression perception and represent a timely contribution in light of recent high-profile challenges to the existing evidence base.
facial expressions; smile sincerity; mirror neurons; simulation; motor theories
Two visual-world experiments tested the hypothesis that expectations based on preceding prosody influence the perception of suprasegmental cues to lexical stress. The results demonstrate that listeners’ consideration of competing alternatives with different stress patterns (e.g., ‘jury/gi’raffe) can be influenced by the fundamental frequency and syllable timing patterns across material preceding a target word. When preceding stressed syllables distal to the target word shared pitch and timing characteristics with the first syllable of the target word, pictures of alternatives with primary lexical stress on the first syllable (e.g., jury) initially attracted more looks than alternatives with unstressed initial syllables (e.g., giraffe). This effect was modulated when preceding unstressed syllables had pitch and timing characteristics similar to the initial syllable of the target word, with more looks to alternatives with unstressed initial syllables (e.g., giraffe) than to those with stressed initial syllables (e.g., jury). These findings suggest that expectations about the acoustic realization of upcoming speech include information about metrical organization and lexical stress, and that these expectations constrain the initial interpretation of suprasegmental stress cues. These distal prosody effects implicate on-line probabilistic inferences about the sources of acoustic-phonetic variation during spoken-word recognition.
Prosody; spoken-word recognition; lexical stress; expectations; lexical competition
Many studies using cognitive tasks have found that stereotype threat, or concern about confirming a negative stereotype about one's group, debilitates performance. The few studies that documented similar effects on sensorimotor performance have used only relatively coarse measures to quantify performance. Three experiments tested the effect of stereotype threat on a rhythmic ball bouncing task, both at the novice and skilled level. Previous analysis of the task dynamics afforded more detailed quantification of the effect of threat on motor control. In this task, novices hit the ball with positive racket acceleration, indicative of unstable performance. With practice, they learn to stabilize error by changing their ball-racket impact from positive to negative acceleration. Results showed that for novices, stereotype threat potentiated hitting the ball with positive racket acceleration, leading to poorer performance by stigmatized females. However, when the threat manipulation was delivered after having acquired some skill, reflected by negative racket acceleration, the stigmatized females performed better. These findings are consistent with the mere effort account that argues that stereotype threat potentiates the most likely response on the given task. The study also demonstrates the value of identifying the control mechanisms through which stereotype threat has its effects on outcome measures.
Stereotype Threat; Mere Effort; Rhythmic Movements; Motor Control; Social Threat
The human perceptual-motor system is tightly coupled to the physical and informational dynamics of a task environment. These dynamics operate to constrain the high-dimensional order of the human movement system into low-dimensional, task-specific synergies—functional groupings of structural elements that are temporarily constrained to act as a single coordinated unit. The aim of the current study was to determine whether synergistic processes operate when coacting individuals coordinate to perform a discrete joint-action task. Pairs of participants sat next to each other and each used 1 arm to complete a pointer-to-target task. Using the uncontrolled manifold (UCM) analysis for the first time in a discrete joint action, the structure of joint-angle variance was examined to determine whether there was synergistic organization of the degrees of freedom employed at the interpersonal or intrapersonal levels. The results revealed that the motor actions performed by coactors were synergistically organized at both the interpersonal and intrapersonal levels. More importantly, however, the interpersonal synergy was found to be significantly stronger than the intrapersonal synergies. Accordingly, the results provide clear evidence that coacting individuals can become temporarily organized to form single synergistic 2-person systems during performance of a discrete joint action.
joint action; interpersonal coordination; motor synergies; motor control; uncontrolled manifold
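The UCM analysis mentioned above partitions across-trial joint-angle variance into a component lying within the null space of the task Jacobian (configurations that leave the task variable, e.g., pointer-tip position, unchanged) and a component orthogonal to it; a synergy exists when the former exceeds the latter. A minimal sketch of that decomposition, assuming a full-rank task Jacobian evaluated at the mean posture; names and shapes are illustrative, not the study's actual code:

```python
import numpy as np

def ucm_variance(joint_angles, jacobian):
    """Partition across-trial joint-angle variance into a component
    within the UCM (null space of the task Jacobian, leaving the task
    variable unchanged) and an orthogonal component.

    joint_angles : (n_trials, n_dof) joint configurations
    jacobian     : (d_task, n_dof) Jacobian of the task variable,
                   evaluated at the mean configuration; assumed full rank
    Returns (v_ucm, v_ort), each normalized per degree of freedom.
    """
    n_trials, n_dof = joint_angles.shape
    d_task = jacobian.shape[0]
    deviations = joint_angles - joint_angles.mean(axis=0)

    # Orthonormal basis for the Jacobian's null space (the UCM)
    _, _, vt = np.linalg.svd(jacobian)
    null_basis = vt[d_task:].T                  # (n_dof, n_dof - d_task)

    # Project each trial's deviation onto the UCM; the remainder is
    # variance that actually perturbs the task variable
    within = deviations @ null_basis
    ss_within = np.sum(within ** 2)
    v_ucm = ss_within / (n_trials * (n_dof - d_task))
    v_ort = (np.sum(deviations ** 2) - ss_within) / (n_trials * d_task)
    return v_ucm, v_ort
```

A synergy index comparing `v_ucm` to `v_ort` (e.g., their normalized difference) then quantifies synergy strength; the study's key comparison is this index computed over the pooled two-person joint space versus each actor's own joints.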
Understanding stable patterns of interpersonal movement coordination is essential to understanding successful social interaction and activity (i.e., joint action). Previous research investigating such coordination has primarily focused on the synchronization of simple rhythmic movements (e.g., finger/forearm oscillations or pendulum swinging). Very few studies, however, have explored the stable patterns of coordination that emerge during task-directed complementary coordination tasks. Thus, the aim of the current study was to investigate and model the behavioral dynamics of a complementary collision-avoidance task. Participant pairs performed a repetitive targeting task in which they moved computer stimuli back and forth between sets of target locations without colliding into each other. The results revealed that pairs quickly converged onto a stable, asymmetric pattern of movement coordination that reflected differential control across participants, with 1 participant adopting a more straight-line movement trajectory between targets, and the other participant adopting a more elliptical trajectory between targets. This asymmetric movement pattern was also characterized by a phase lag between participants and was essential to task success. Coupling directionality analysis and dynamical modeling revealed that this dynamic regime was due to participant-specific differences in the coupling functions that defined the task-dynamics of participant pairs. Collectively, the current findings provide evidence that the dynamical coordination processes previously identified to underlie simple motor synchronization can also support more complex, goal-directed, joint action behavior, and can participate in the spontaneous emergence of complementary joint action roles.
joint action; movement coordination; task-dynamics; perception-action
The segregation of sound sources from the mixture of sounds that enters the ear is a core capacity of human hearing, but the extent to which this process depends on attention remains unclear. This study investigated the effect of attention on the ability to segregate sounds via repetition. We utilized a dual-task design in which stimuli to be segregated were presented along with stimuli for a “decoy” task that required continuous monitoring. The task to assess segregation presented a target sound 10 times in a row, each time concurrent with a different distractor sound. McDermott, Wrobleski, and Oxenham (2011) demonstrated that repetition causes the target sound to be segregated from the distractors. Segregation was queried by asking listeners whether a subsequent probe sound was identical to the target. A control task presented similar stimuli but probed discrimination without engaging segregation processes. We present results from 3 different decoy tasks: a visual multiple object tracking task, a rapid serial visual presentation (RSVP) digit encoding task, and a demanding auditory monitoring task. Load was manipulated by using high- and low-demand versions of each decoy task. The data provide converging evidence of a small effect of attention that is nonspecific, in that it affected the segregation and control tasks to a similar extent. In all cases, segregation performance remained high despite the presence of a concurrent, objectively demanding decoy task. The results suggest that repetition-based segregation is robust to inattention.
auditory scene analysis; attention; segregation; streaming; load