Results 1-25 (847242)

1.  Sensory information in perceptual-motor sequence learning: visual and/or tactile stimuli 
Sequence learning in serial reaction time (SRT) tasks has been investigated mostly with unimodal stimulus presentation. This approach disregards the possibility that sequence acquisition may be guided by multiple sources of sensory information simultaneously. In the current study we trained participants in an SRT task with visual only, tactile only, or bimodal (visual and tactile) stimulus presentation. Sequence performance for the bimodal and visual only training groups was similar, while both performed better than the tactile only training group. In a subsequent transfer phase, participants from all three training groups were tested in conditions with visual, tactile, and bimodal stimulus presentation. Sequence performance of the visual only and bimodal training groups again was highly similar across these identical stimulus conditions, indicating that the addition of tactile stimuli did not benefit the bimodal training group. Additionally, comparing across identical stimulus conditions in the transfer phase showed that the poorer sequence performance of the tactile only group during training probably did not reflect a difference in sequence learning but rather a difference in the expression of sequence knowledge.
doi:10.1007/s00221-009-1903-5
PMCID: PMC2713025  PMID: 19565229
Experimental psychology; Motor learning; Sequence; Transfer of learning
2.  Is one enough? The case for non-additive influences of visual features on crossmodal Stroop interference 
When different perceptual signals arising from the same physical entity are integrated, they form a more reliable sensory estimate. When such repetitive sensory signals are pitted against other competing stimuli, such as in a Stroop task, this redundancy may lead to stronger processing that biases behavior toward reporting the redundant stimuli. This bias would therefore be expected to evoke greater incongruency effects than if these stimuli did not contain redundant sensory features. In the present paper we report that this is not the case for a set of three crossmodal, auditory-visual Stroop tasks. In these tasks participants attended to, and reported, either the visual or the auditory stimulus (in separate blocks) while ignoring the other, unattended modality. The visual component of these stimuli could be purely semantic (words), purely perceptual (colors), or the combination of both. Based on previous work showing enhanced crossmodal integration and visual search gains for redundantly coded stimuli, we had expected that, relative to the single features, redundant visual features would both have induced greater visual distracter incongruency effects for attended auditory targets and have been less influenced by auditory distracters for attended visual targets. Overall, reaction times were faster for visual targets and were dominated by behavioral facilitation for the cross-modal interactions (relative to interference), but showed surprisingly little influence of visual feature redundancy. Post-hoc analyses revealed modest and trending evidence for possible increases in behavioral interference for redundant visual distracters on auditory targets; however, these effects were substantially smaller than anticipated and were not accompanied by a redundancy effect for behavioral facilitation or for attended visual targets.
doi:10.3389/fpsyg.2013.00799
PMCID: PMC3813948  PMID: 24198800
multisensory conflict; Stroop task; redundancy gains; stimulus onset asynchrony (SOA)
3.  Different Neuroplasticity for Task Targets and Distractors 
PLoS ONE  2011;6(1):e15342.
Adult learning-induced sensory cortex plasticity results in enhanced action potential rates in neurons that have the most relevant information for the task, or those that respond strongly to one sensory stimulus but weakly to its comparison stimulus. Current theories suggest this plasticity is caused when target stimulus evoked activity is enhanced by reward signals from neuromodulatory nuclei. Prior work has found evidence suggestive of nonselective enhancement of neural responses, and suppression of responses to task distractors, but the differences in these effects between detection and discrimination have not been directly tested. Using cortical implants, we defined physiological responses in macaque somatosensory cortex during serial, matched, detection and discrimination tasks. Nonselective increases in neural responsiveness were observed during detection learning. Suppression of responses to task distractors was observed during discrimination learning, and this suppression was specific to cortical locations that sampled responses to the task distractor before learning. Changes in receptive field size were measured as the area of skin that had a significant response to a constant magnitude stimulus, and these areal changes paralleled changes in responsiveness. From before detection learning until after discrimination learning, the enduring changes were selective suppression of cortical locations responsive to task distractors, and nonselective enhancement of responsiveness at cortical locations selective for target and control skin sites. A comparison of observations in prior studies with the observed plasticity effects suggests that the non-selective response enhancement and selective suppression suffice to explain known plasticity phenomena in simple spatial tasks. This work suggests that differential responsiveness to task targets and distractors in primary sensory cortex for a simple spatial detection and discrimination task arises from nonselective increases in response over a broad cortical locus that includes the representation of the task target, and selective suppression of responses to the task distractor within this locus.
doi:10.1371/journal.pone.0015342
PMCID: PMC3031528  PMID: 21297962
4.  The limits and motivating potential of sensory stimuli as reinforcers for autistic children. 
This study investigated the reinforcing properties, limits, and motivating potentials of sensory stimuli with autistic children. In the first phase of the study, four intellectually retarded autistic children were exposed to three different types of sensory stimulation (vibration, music, and strobe light) as well as edible and social reinforcers for ten-second intervals contingent upon six simple bar pressing responses. In the second phase, the same events were used as reinforcers for correct responses in learning object labels. The results indicated that: (a) sensory stimuli can be used effectively as reinforcers to maintain high, durable rates of responding in a simple pressing task; (b) ranked preferences for sensory stimuli revealed a unique configuration of responding for each child; and (c) sensory stimuli have motivating potentials comparable to those of the traditional food and social reinforcers even when training receptive language tasks.
doi:10.1901/jaba.1981.14-339
PMCID: PMC1308219  PMID: 7298542
5.  The Neural Basis of Implicit Perceptual Sequence Learning 
The present fMRI study investigated the neural areas involved in implicit perceptual sequence learning. To obtain more insight into the functional contributions of the brain areas, we tracked both the behavioral and neural time course of the learning process, using a perceptual serial color matching task. Next, to investigate whether the neural time course was specific to perceptual information, imaging results were compared to the results of implicit motor sequence learning, previously investigated using an identical serial color matching task (Gheysen et al., 2010). Results indicated that implicit sequences can be acquired by at least two neural systems, the caudate nucleus and the hippocampus, which have different operating principles. The caudate nucleus contributed to the implicit sequence learning process for perceptual as well as motor information in a similar and gradual way. The hippocampus, on the other hand, was engaged in a much faster learning process, which was more pronounced for the motor than for the perceptual task. Interestingly, the perceptual and motor learning processes occurred at a comparable implicit level, suggesting that consciousness is not the main factor dissociating the hippocampal from the caudate learning system. This study is not only the first to successfully and unambiguously compare brain activation between perceptual and motor levels of implicit sequence learning, but it also provides new insights into the specific hippocampal and caudate learning functions.
doi:10.3389/fnhum.2011.00137
PMCID: PMC3213531  PMID: 22087090
implicit sequence learning; perceptual sequence learning; motor sequence learning; fMRI; caudate nucleus; hippocampus
6.  Mismatching Amodal Redundancy Inhibits Operant Learning in 5-month-old Infants 
Infant Behavior & Development  2012;35(3):360-368.
The current study examined the functional role redundant amodal information plays in an operant learning task in 5-month-old human infants. Prior studies have suggested that both simple and complex learning processes (discrimination, associative conditioning) are facilitated when amodal information is presented redundantly across sensory modalities. These studies, however, did not test whether the amodal information had to be similar across modalities for facilitation to occur. The current study examined how both matching and mismatching redundant amodal information about the shape of an object would influence learning of an operant response in human infants. Infants learned an operant kick response to move a mobile of cylinders while either holding a cylinder, a rectangular cube, or no object. Kick rate served as the dependent measure. The results showed that infants given mismatching redundant amodal information (e.g., viewed cylinders while holding a rectangular cube) showed inhibited operant learning. These results extend the Intersensory Redundancy Hypothesis by demonstrating that amodal redundancy can function in some instances to inhibit complex learning processes.
doi:10.1016/j.infbeh.2012.04.004
PMCID: PMC3409334  PMID: 22721736
Amodal redundancy; Operant learning; Inhibition; Multisensory processing; Shape; Human infants
7.  Perceptual sequence learning in a serial reaction time task 
In the serial reaction time task (SRTT), a sequence of visuo-spatial cues instructs subjects to perform a sequence of movements that follows a repeating pattern. Though motor responses are known to support implicit sequence learning in this task, the goal of the present experiments is to determine whether observation of the sequence of cues alone can also yield evidence of implicit sequence learning. This question has been difficult to answer because, in previous research, performance improvements that appeared to be due to implicit perceptual sequence learning could also be due to spontaneous increases in explicit knowledge of the sequence. The present experiments use probabilistic sequences to prevent the spontaneous development of explicit awareness. They include a training phase, during which half of the subjects observe and the other half respond, followed by a transfer phase in which everyone responds. Results show that observation alone can support sequence learning, which at transfer translates into performance equivalent to that of a group that made motor responses during training. However, perceptual learning or its expression is sensitive to changes in target colors, and its expression is impaired by concurrent explicit search. Motor-response based learning is not affected by these manipulations. Thus, observation alone can support implicit sequence learning, even of higher order probabilistic sequences. However, perceptual learning can be prevented or concealed by variations of stimuli or task demands.
doi:10.1007/s00221-008-1411-z
PMCID: PMC2672106  PMID: 18478209
Implicit; Explicit; Perceptual Learning; Sequence Learning; Motor Learning
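The probabilistic sequences referred to in this abstract can be illustrated with a small, hedged sketch: target locations mostly follow a repeating pattern but occasionally deviate at random, which is what keeps learners from developing explicit awareness of the regularity. The specific pattern, probabilities, and four-location layout below are illustrative assumptions, not the authors' exact design.

```python
import random

def probabilistic_srt_sequence(pattern, n_trials, p_follow=0.85, n_locations=4):
    """Generate SRT target locations that follow a repeating pattern with
    probability p_follow and jump to a different random location otherwise."""
    targets = []
    for t in range(n_trials):
        patterned = pattern[t % len(pattern)]
        if random.random() < p_follow:
            targets.append(patterned)
        else:
            # off-pattern trial: pick any location except the patterned one
            targets.append(random.choice(
                [loc for loc in range(n_locations) if loc != patterned]))
    return targets

# Hypothetical 8-element pattern over four response locations
trials = probabilistic_srt_sequence([0, 2, 1, 3, 2, 0, 3, 1], n_trials=96)
```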
8.  Generalized lessons about sequence learning from the study of the serial reaction time task 
Advances in Cognitive Psychology  2012;8(2):165-178.
Over the last 20 years, researchers have used the serial reaction time (SRT) task to investigate the nature of spatial sequence learning. They have used the task to identify the locus of spatial sequence learning, identify situations that enhance and those that impair learning, and identify the important cognitive processes that facilitate this type of learning. Although controversies remain, the SRT task has been integral in enhancing our understanding of implicit sequence learning. It is important, however, to ask what, if anything, the discoveries made using the SRT task tell us about implicit learning more generally. This review analyzes the state of the current spatial SRT sequence learning literature, highlighting the stimulus-response rule hypothesis of sequence learning, which we believe provides a unifying account of discrepant SRT data. It also challenges researchers to use the vast body of knowledge acquired with the SRT task to understand other implicit learning literatures too often ignored in the context of this particular task. This broad perspective will make it possible to identify congruences among data acquired using various different tasks, which will allow us to generalize about the nature of implicit learning.
doi:10.2478/v10053-008-0113-1
PMCID: PMC3376886  PMID: 22723815
sequence learning; implicit learning; serial reaction time task
9.  Performance differences in visually- and internally-guided continuous manual tracking movements 
Control of familiar visually-guided movements involves internal plans as well as visual and other online sensory information, though how visual information and internal plans combine for reaching movements remains unclear. Traditional motor sequence learning tasks, such as the serial reaction time task, use stereotyped movements and measure only reaction time. Here, we used a continuous sequential reaching task comprised of naturalistic movements, in order to provide detailed kinematic performance measures. When we embedded pre-learned trajectories (those presumably having an internal plan) within similar but unpredictable movement sequences, participants performed the two kinds of movements with remarkable similarity, and position error alone could not reliably identify the epoch. For such embedded movements, performance during pre-learned sequences showed statistically significant but trivial decreases in measures of kinematic error, compared to performance during novel sequences. However, different sets of kinematic error variables changed significantly between learned and novel sequences for individual participants, suggesting that each participant used distinct motor strategies favoring different kinematic variables during each of the two movement types. Algorithms that incorporated multiple kinematic variables identified transitions between the two movement types well but imperfectly. Hidden Markov model classification differentiated learned and novel movements on single trials based on the above kinematic error variables with 82±5% accuracy within 244±696 ms, despite the limited extent of changes in those errors. These results suggest that the motor system can achieve markedly similar performance whether or not an internal plan is present, as only subtle changes arise from any difference between the neural substrates involved in those two conditions.
doi:10.1007/s00221-008-1489-3
PMCID: PMC2574818  PMID: 18648785
Motor control; Reaching; Human; Sequence Learning; Hidden Markov models
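A minimal sketch of the kind of single-trial classification reported above: a two-state Gaussian hidden Markov model is fit to a time series of kinematic error features, and the decoded state sequence marks putative transitions between pre-learned and novel movement epochs. It relies on the third-party hmmlearn package; the feature set and simulated numbers are assumptions for illustration, not the authors' data or exact model.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM  # pip install hmmlearn

# Simulated kinematic error features (columns: position, velocity, curvature error)
rng = np.random.default_rng(0)
novel = rng.normal([1.0, 1.2, 0.8], 0.3, size=(500, 3))    # hypothetical novel epoch
learned = rng.normal([0.9, 1.0, 0.7], 0.3, size=(500, 3))  # hypothetical learned epoch
X = np.vstack([novel, learned, novel])

# Two hidden states, one intended to capture each movement type
model = GaussianHMM(n_components=2, covariance_type="diag", n_iter=200, random_state=0)
model.fit(X)
states = model.predict(X)                          # decoded state per time sample
transitions = np.flatnonzero(np.diff(states)) + 1  # candidate epoch boundaries
```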
10.  Sensory-motor mechanisms in human parietal cortex underlie arbitrary visual decisions 
Nature Neuroscience  2008;11(12).
The neural mechanism underlying simple perceptual decision-making in monkeys has recently been conceptualized as an integrative process in which sensory evidence supporting different response options accumulates gradually over time. For example, intraparietal neurons accumulate motion information over time in favour of a specific oculomotor choice. It is unclear, however, whether this mechanism generalizes to more complex decisions based on arbitrary stimulus-response associations. Here, in a task requiring participants to arbitrarily associate visual stimuli (faces or places) with different actions (eye or hand-pointing movements), we show that activity of effector-specific regions in human posterior parietal cortex reflects the ‘strength’ of the sensory evidence in favour of the preferred response. These regions, which do not respond to sensory stimuli per se, integrate, after learning, sensory evidence toward the outcome of an arbitrary decision. We conclude that even arbitrary decisions can be mediated by sensory-motor mechanisms completely triggered by contextual stimulus-response associations.
doi:10.1038/nn.2221
PMCID: PMC3861399  PMID: 18997791
11.  Spatial and nonspatial implicit motor learning in Korsakoff’s amnesia: evidence for selective deficits 
Patients with amnesia have deficits in declarative memory but intact memory for motor and perceptual skills, which suggests that explicit memory and implicit memory are distinct. However, the evidence that implicit motor learning is intact in amnesic patients is contradictory. This study investigated implicit sequence learning in amnesic patients with Korsakoff’s syndrome (N = 20) and matched controls (N = 14), using the classical Serial Reaction Time Task and a newly developed Pattern Learning Task in which the planning and execution of the responses are more spatially demanding. Results showed that implicit motor learning occurred in both groups of participants; however, on the Pattern Learning Task, the percentage of errors did not increase in the Korsakoff group in the random test phase, which is indicative of less implicit learning. Thus, our findings show that the performance of patients with Korsakoff’s syndrome is compromised on an implicit learning task with a strong spatial response component.
doi:10.1007/s00221-011-2841-6
PMCID: PMC3178790  PMID: 21853284
Korsakoff’s syndrome; Amnesia; Implicit learning; Motor learning; Sequence learning; Memory
12.  Who Learns More? Cultural Differences in Implicit Sequence Learning 
PLoS ONE  2013;8(8):e71625.
Background
It is well documented that East Asians differ from Westerners in conscious perception and attention. However, few studies have explored cultural differences in unconscious processes such as implicit learning.
Methodology/Principal Findings
The global-local Navon letters were adopted in the serial reaction time (SRT) task, during which Chinese and British participants were instructed to respond to global or local letters, to investigate whether culture influences what people acquire in implicit sequence learning. Our results showed that, from the beginning, the British participants expressed a greater local bias in perception than the Chinese, confirming a cultural difference in perception. Further, over extended exposure, the Chinese learned the target regularity better than the British when the targets were global, indicating a global advantage for the Chinese in implicit learning. Moreover, Chinese participants acquired greater unconscious knowledge of an irrelevant regularity than British participants, indicating that the Chinese were more sensitive to contextual regularities than the British.
Conclusions/Significance
The results suggest that cultural biases can profoundly influence both what people consciously perceive and unconsciously learn.
doi:10.1371/journal.pone.0071625
PMCID: PMC3737123  PMID: 23940773
13.  The neural basis for combinatorial coding in a cortical population response 
We have used a combination of theory and experiment to assess how information is represented in a realistic cortical population response, examining how motion direction and timing are encoded in groups of neurons in cortical area MT. Combining data from several single unit experiments, we constructed model population responses in small time windows, and represented the response in each window as a binary vector of 1's or 0's signifying spikes or no spikes from each cell. We found that patterns of spikes and silence across a population of nominally redundant neurons can carry up to twice as much information about visual motion as the population spike count does, even when the neurons respond independently to their sensory inputs. This extra information arises by virtue of the broad diversity of firing rate dynamics found in even very similarly tuned groups of MT neurons. Additionally, specific patterns of spiking and silence can carry more information than the sum of their parts (synergy), opening up the possibility for combinatorial coding in cortex. These results also held for populations in which we imposed levels of non-independence (correlation) comparable to those found in cortical recordings. Our findings suggest that combinatorial codes are advantageous for representing stimulus information on short time scales, even when neurons have no complicated, stimulus-dependent correlation structure.
doi:10.1523/JNEUROSCI.4390-08.2008
PMCID: PMC2693376  PMID: 19074026
visual motion; MT; Spike Trains; Information Theory; Smooth Pursuit; Correlated variability
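The comparison at the heart of this abstract, information carried by the full pattern of spikes and silences versus information carried by the summed spike count, can be sketched with a small empirical mutual-information estimate on simulated binary responses. The firing probabilities, cell count, and window structure below are invented for illustration.

```python
import numpy as np
from collections import Counter

def mutual_information(stimuli, responses):
    """Empirical mutual information (bits) between two discrete label sequences."""
    n = len(stimuli)
    p_s, p_r = Counter(stimuli), Counter(responses)
    mi = 0.0
    for (s, r), c in Counter(zip(stimuli, responses)).items():
        mi += (c / n) * np.log2((c / n) / ((p_s[s] / n) * (p_r[r] / n)))
    return mi

rng = np.random.default_rng(1)
directions = rng.integers(0, 2, size=5000)  # two motion directions
# Three similarly tuned cells whose spike probabilities differ across direction,
# but whose summed activity is nearly direction-independent.
p_spike = np.where(directions[:, None] == 1, [0.8, 0.5, 0.2], [0.2, 0.5, 0.8])
spikes = (rng.random((5000, 3)) < p_spike).astype(int)

pattern_code = [tuple(row) for row in spikes]  # spikes-and-silences "word"
count_code = spikes.sum(axis=1)                # population spike count
print(mutual_information(directions, pattern_code),
      mutual_information(directions, count_code))
```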
14.  Combining Symbolic Cues with Sensory Input and Prior Experience in an Iterative Bayesian Framework 
Perception and action are the result of an integration of various sources of information, such as current sensory input, prior experience, or the context in which a stimulus occurs. Often, the interpretation is not trivial and hence needs to be learned from the co-occurrence of stimuli. Yet, how do we combine such diverse information to guide our action? Here we use a distance production-reproduction task to investigate the influence of auxiliary, symbolic cues, sensory input, and prior experience on human performance under three different conditions that vary in the information provided. Our results indicate that subjects can (1) learn the mapping of a verbal, symbolic cue onto the stimulus dimension and (2) integrate symbolic information and prior experience into their estimate of displacements. The behavioral results are explained by two distinct generative models that represent different structural approaches to how a Bayesian observer would combine prior experience, sensory input, and symbolic cue information into a single estimate of displacement. The first model interprets the symbolic cue in the context of categorization, assuming that it reflects information about a distinct underlying stimulus range (categorical model). The second model applies a multi-modal integration approach and treats the symbolic cue as additional sensory input to the system, which is combined with the current sensory measurement and the subjects’ prior experience (cue-combination model). Notably, both models account equally well for the observed behavior despite their different structural assumptions. The present work thus provides evidence that humans can interpret abstract symbolic information and combine it with other types of information such as sensory input and prior experience. The similar explanatory power of the two models further suggests that issues such as categorization and cue-combination could be explained by alternative probabilistic approaches.
doi:10.3389/fnint.2012.00058
PMCID: PMC3417299  PMID: 22905024
pre-cueing; path integration; cue-combination; multi-modal; categorization; experience-dependent prior; magnitude reproduction; iterative Bayes
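One of the two generative models described above, the cue-combination variant, treats the symbolic cue as an extra noisy measurement fused with the current sensory input and the prior. A minimal sketch, assuming independent Gaussian sources (an assumption of this sketch, not necessarily the paper's exact formulation), reduces that fusion to a precision-weighted average:

```python
import numpy as np

def fuse_gaussian_estimates(means, variances):
    """Precision-weighted fusion of independent Gaussian estimates,
    e.g. prior experience, current sensory input, and a symbolic cue."""
    means = np.asarray(means, dtype=float)
    precisions = 1.0 / np.asarray(variances, dtype=float)
    post_var = 1.0 / precisions.sum()
    post_mean = post_var * np.dot(precisions, means)
    return post_mean, post_var

# Hypothetical displacement estimates (metres): prior = 10, sensory input = 12,
# symbolic cue mapped to 14, with increasing uncertainty for each source.
mean, var = fuse_gaussian_estimates([10.0, 12.0, 14.0], [1.0, 2.0, 4.0])
```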
15.  Speed and Accuracy of Visual Motion Discrimination by Rats 
PLoS ONE  2013;8(6):e68505.
Animals must continuously evaluate sensory information to select the preferable among possible actions in a given context, including the option to wait for more information before committing to another course of action. In experimental sensory decision tasks that replicate these features, reaction time distributions can be informative about the implicit rules by which animals determine when to commit and what to do. We measured reaction times of Long-Evans rats discriminating the direction of motion in a coherent random dot motion stimulus, using a self-paced two-alternative forced-choice (2-AFC) reaction time task. Our main findings are: (1) When motion strength was constant across trials, the error trials had shorter reaction times than correct trials; in other words, accuracy increased with response latency. (2) When motion strength was varied in randomly interleaved trials, accuracy increased with motion strength, whereas reaction time decreased. (3) Accuracy increased with reaction time for each motion strength considered separately, and in the interleaved motion strength experiment overall. (4) When stimulus duration was limited, accuracy improved with stimulus duration, whereas reaction time decreased. (5) Accuracy decreased with response latency after stimulus offset. This was the case for each stimulus duration considered separately, and in the interleaved duration experiment overall. We conclude that rats integrate visual evidence over time, but in this task the time of their response is governed more by elapsed time than by a criterion for sufficient evidence.
doi:10.1371/journal.pone.0068505
PMCID: PMC3695925  PMID: 23840856
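The contrast drawn in the final sentence, responses governed by elapsed time rather than by a criterion for sufficient evidence, can be made concrete with a toy accumulator simulation. Both policies integrate the same noisy momentary evidence and differ only in the stopping rule; all parameters are invented for illustration and are not fitted to the rat data.

```python
import numpy as np

rng = np.random.default_rng(2)

def trial(drift, dt=0.01, noise=1.0, bound=1.5, deadline=1.0, rule="bound"):
    """Integrate noisy evidence; stop at an evidence bound or at a fixed elapsed time."""
    x, t = 0.0, 0.0
    while True:
        t += dt
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        if rule == "bound" and abs(x) >= bound:
            return x > 0, t  # choice, reaction time
        if rule == "deadline" and t >= deadline:
            return x > 0, t

# Compare accuracy and mean reaction time under the two stopping rules
for rule in ("bound", "deadline"):
    outcomes = [trial(drift=0.8, rule=rule) for _ in range(2000)]
    accuracy = np.mean([choice for choice, _ in outcomes])
    mean_rt = np.mean([t for _, t in outcomes])
    print(rule, round(float(accuracy), 3), round(float(mean_rt), 3))
```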
16.  Inverted-U Function Relating Cortical Plasticity and Task Difficulty 
Neuroscience  2012;205:81-90.
Many psychological and physiological studies with simple stimuli have suggested that perceptual learning specifically enhances the response of primary sensory cortex to task-relevant stimuli. The aim of this study was to determine whether auditory discrimination training on complex tasks enhances primary auditory cortex responses to a target sequence relative to non-target and novel sequences. We collected responses from more than 2,000 sites in 31 rats trained on one of six discrimination tasks that differed primarily in the similarity of the target and distractor sequences. Unlike training with simple stimuli, long-term training with complex stimuli did not generate target specific enhancement in any of the groups. Instead, cortical receptive field size decreased, latency decreased, and paired pulse depression decreased in rats trained on the tasks of intermediate difficulty while tasks that were too easy or too difficult either did not alter or degraded cortical responses. These results suggest an inverted-U function relating neural plasticity and task difficulty.
doi:10.1016/j.neuroscience.2011.12.056
PMCID: PMC3299820  PMID: 22249158
task difficulty; sequence learning; cortical plasticity; auditory cortex; operant training
17.  Implicit learning of what comes when and where within a sequence: The time-course of acquiring serial position-item and item-item associations to represent serial order 
Much research has been aimed at the representations and mechanisms that enable the learning of sequential structures. A central debate concerns the question of whether item-item associations (i.e., in the sequence A-B-C-D, B comes after A) or associations of item and serial list position (i.e., B is the second item in the list) are used to represent serial order. Previously, we showed that in a variant of the implicit serial reaction time task, the sequence representation contains associations between serial position and item information (Schuck, Gaschler, Keisler, & Frensch, 2011). Here, we applied models and research methods from working memory research to implicit serial learning to replicate and extend our findings. The experiment involved three sessions of sequence learning. Results support the view that participants acquire knowledge about order structure (item-item associations) and about ordinal structure (serial position-item associations). Analyses suggest that only the simultaneous use of the two types of acquired knowledge can explain learning-related performance increases. Additionally, our results indicate that serial list position information plays a role very early in learning and that inter-item associations increasingly control behavior in later stages.
doi:10.2478/v10053-008-0106-0
PMCID: PMC3367904  PMID: 22679464
implicit sequence learning; serial order; SRT; chaining; race model
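The two candidate representations debated above can be sketched as two lookup tables built from a repeating list: a chaining table of item-item transitions and a table of serial position-item associations, with responses driven by a weighted blend of both, mirroring the claim that only their simultaneous use explains the performance gains. The sequence and the blending weight below are illustrative assumptions.

```python
from collections import defaultdict

sequence = ["A", "B", "C", "D", "B", "A", "D", "C"]  # hypothetical repeating list

item_item = defaultdict(lambda: defaultdict(float))      # what tends to follow each item
position_item = defaultdict(lambda: defaultdict(float))  # what tends to occupy each position

for pos, item in enumerate(sequence):
    position_item[pos][item] += 1.0
    item_item[sequence[pos - 1]][item] += 1.0  # pos - 1 == -1 wraps to the last item

def predict_next(prev_item, position, w_chain=0.5):
    """Blend item-item and serial position-item associations into one prediction."""
    scores = defaultdict(float)
    for item, strength in item_item[prev_item].items():
        scores[item] += w_chain * strength
    for item, strength in position_item[position].items():
        scores[item] += (1.0 - w_chain) * strength
    return max(scores, key=scores.get)

print(predict_next("A", position=1))  # expects "B": both association types agree here
```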
18.  Social intuition as a form of implicit learning: Sequences of body movements are learned less explicitly than letter sequences 
Advances in Cognitive Psychology  2012;8(2):121-131.
In the current paper, we first evaluate the suitability of traditional serial reaction time (SRT) and artificial grammar learning (AGL) experiments for measuring implicit learning of social signals. We then report the results of a novel sequence learning task which combines aspects of the SRT and AGL paradigms to meet our suggested criteria for how implicit learning experiments can be adapted to increase their relevance to situations of social intuition. The sequences followed standard finite-state grammars. Sequence learning and consciousness of acquired knowledge were compared between 2 groups of 24 participants viewing either sequences of individually presented letters or sequences of body-posture pictures, which were described as series of yoga movements. Participants in both conditions showed above-chance classification accuracy, indicating that sequence learning had occurred in both stimulus conditions. This shows that sequence learning can still be found when learning procedures reflect the characteristics of social intuition. Rule awareness was measured using trial-by-trial evaluation of decision strategy (Dienes & Scott, 2005; Scott & Dienes, 2008). For letters, sequence classification was best on trials where participants reported responding on the basis of explicit rules or memory, indicating some explicit learning in this condition. For body-posture, classification was not above chance on these types of trial, but instead showed a trend to be best on those trials where participants reported that their responses were based on intuition, familiarity, or random choice, suggesting that learning was more implicit. Results therefore indicate that the use of traditional stimuli in research on sequence learning might underestimate the extent to which learning is implicit in domains such as social learning, contributing to ongoing debate about levels of conscious awareness in implicit learning.
doi:10.2478/v10053-008-0109-x
PMCID: PMC3367869  PMID: 22679467
implicit learning; social intuition; intuition; artificial grammar learning; human movement; consciousness; fringe consciousness
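The sequences in this study "followed standard finite-state grammars"; such a grammar can be sketched as a transition table from which grammatical strings are sampled. The particular states and symbols below are invented for illustration and are not the grammar used in the paper.

```python
import random

# A small artificial grammar: state -> list of (emitted symbol, next state); None ends the string.
GRAMMAR = {
    0: [("M", 1), ("V", 2)],
    1: [("T", 1), ("V", 3)],
    2: [("X", 2), ("T", 3)],
    3: [("R", 1), ("X", None)],
}

def generate_string(grammar, start=0, max_len=12):
    """Random walk through the grammar, collecting the emitted symbols."""
    state, symbols = start, []
    while state is not None and len(symbols) < max_len:
        symbol, state = random.choice(grammar[state])
        symbols.append(symbol)
    return "".join(symbols)

training_items = [generate_string(GRAMMAR) for _ in range(20)]
```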
19.  Incidental Learning of Temporal Structures Conforming to a Metrical Framework 
Implicit learning of sequential structures has been investigated mostly for visual, spatial, or motor learning, but rarely for temporal structure learning. The few experiments investigating temporal structure learning have concluded that temporal structures can be learned only when coupled with another structural dimension, such as musical pitch or spatial location. In these studies, the temporal structures were without metrical organization and were dependent upon participants’ response times (Response-to-Stimulus Intervals). In our study, two experiments investigated temporal structure learning based on Inter-Onset-Intervals in the presence of an uncorrelated second dimension (ordinal structure) with metrically organized temporal structures. Our task was an adaptation of the classical Serial Reaction Time paradigm, using an implicit task in the auditory domain (syllable identification). Reaction times (RT) revealed that participants learned the temporal structures over the exposure blocks (decrease in RT) without a correlated ordinal dimension. The introduction of a test block with a novel temporal structure slowed RT and exemplified the typical implicit learning profile. Post-test results suggested that participants did not have explicit knowledge of the metrical temporal structures. These findings provide the first evidence of the learning of temporal structure with an uncorrelated ordinal structure, and set a foundation for further investigation of temporal cognition.
doi:10.3389/fpsyg.2012.00294
PMCID: PMC3425964  PMID: 22936921
temporal cognition; metrical organization; serial reaction time task; implicit learning; incidental learning; auditory modality
20.  Double Dissociation Between Action-driven and Perception-driven Conflict Resolution Invoking Anterior versus Posterior Brain Systems 
NeuroImage  2009;48(2):381-390.
The ability to select and integrate relevant information in the presence of competing irrelevant information can be enhanced by advance information to direct attention and guide response selection. Attentional preparation can reduce perceptual and response conflict, yet little is known about the neural source of conflict resolution, whether it is resolved by modulating neural responses for perceptual selection to emphasize task-relevant information or for action selection to inhibit pre-potent responses to interfering information. We manipulated perceptual information that either matched or did not match the relevant color feature of an upcoming Stroop stimulus and recorded hemodynamic brain responses to these events. Longer reaction times to incongruent than congruent color-word Stroop stimuli indicated conflict; however, conflict was even greater when a color cue correctly predicted the Stroop target’s color (match) than when it did not (nonmatch). A predominantly anterior network was activated for Stroop-match and a predominantly posterior network was activated for Stroop-nonmatch. Thus, when a stimulus feature did not match the expected feature, a perceptually-driven posterior attention system was engaged, whereas when interfering, automatically-processed semantic information required inhibition of pre-potent responses, an action-driven anterior control system was engaged. These findings show a double dissociation of anterior and posterior cortical systems engaging in different types of control for perceptually-driven and action-driven conflict resolution.
doi:10.1016/j.neuroimage.2009.06.058
PMCID: PMC2753237  PMID: 19573610
Attention; Conflict; Control; fMRI; Perceptual Cueing
21.  Lesions of reuniens and rhomboid thalamic nuclei impair radial maze win-shift performance 
Hippocampus  2010;21(8):815-826.
The reuniens (Re) and rhomboid (Rh) nuclei are major sources of thalamic input to hippocampus and medial prefrontal cortex. We compared effects of lesions in ReRh and other parts of the midline-intralaminar complex on tasks affected by lesions in terminal fields innervated by these nuclei, including: visuospatial reaction time (VSRT), a measure of sensory guided responding; serial VSRT, a measure of action sequence learning; and win/shift radial arm maze (RAM) measures of spatial memory. ReRh lesions affected RAM, but not VSRT or serial VSRT performance. The effects of caudal intralaminar lesions were doubly dissociated from ReRh lesions, affecting VSRT, but not RAM or serial VSRT performance. Rostral intralaminar lesions did not produce significant impairments, other than a subgroup with larger lesions that were impaired performing a delayed RAM task. Combined lesions damaging all three sites produced RAM deficits comparable to ReRh lesions and VSRT deficits comparable to caudal intralaminar lesions. Thus there was no indication that deficits produced by lesions in one site were exacerbated significantly by the cumulative effect of damage in other parts of the midline-intralaminar complex. The effects of ReRh lesions provide evidence that these nuclei affect memory functions of hippocampus and medial prefrontal cortex. The double dissociation observed between the effects of ReRh and caudal intralaminar nuclei provides evidence that different nuclei within the midline-intralaminar complex affect distinct aspects of cognition consistent with the effects of lesions in the terminal fields they innervate.
doi:10.1002/hipo.20797
PMCID: PMC2974946  PMID: 20572196
diencephalic amnesia; midline and intralaminar thalamic nuclei; visuospatial reaction time; motor sequence learning
22.  Just Do It: Action-Dependent Learning Allows Sensory Prediction 
PLoS ONE  2011;6(10):e26020.
Sensory-motor learning is commonly considered as a mapping process, whereby sensory information is transformed into the motor commands that drive actions. However, this directional mapping, from inputs to outputs, is part of a loop; sensory stimuli cause actions and vice versa. Here, we explore whether actions affect the understanding of the sensory input that they cause. Using a visuo-motor task in humans, we demonstrate two types of learning-related behavioral effects. Stimulus-dependent effects reflect stimulus-response learning, while action-dependent effects reflect a distinct learning component, allowing the brain to predict the forthcoming sensory outcome of actions. Together, the stimulus-dependent and the action-dependent learning components allow the brain to construct a complete internal representation of the sensory-motor loop.
doi:10.1371/journal.pone.0026020
PMCID: PMC3187836  PMID: 21998746
23.  Selective Impairments in Implicit Learning in Parkinson’s Disease 
Brain Research  2006;1137(1):104-110.
The basal ganglia are thought to participate in implicit sequence learning. However, the exact nature of this role has been difficult to determine in light of the conflicting evidence on implicit learning in subjects with Parkinson’s disease (PD). We examined the performance of PD subjects using a modified form of the serial reaction time task, which ensured that learning remained implicit. Subjects with predominantly right-sided symptoms were trained on a 12-element sequence using the right hand. Although there was no evidence of sequence learning on the basis of response time savings, the subjects showed knowledge of the sequence when performance was assessed in terms of the number of errors made. This effect transferred to the left (untrained) hand as well. Thus, these data demonstrate that PD patients are not impaired at implicitly learning sequential order, but rather at the translation of sequence knowledge into rapid motor performance. Furthermore, the results suggest that the basal ganglia are not essential for implicit sequence learning in PD.
doi:10.1016/j.brainres.2006.12.057
PMCID: PMC1865108  PMID: 17239828
sequence learning; Parkinson’s disease; implicit; serial reaction time
24.  A Sequence Identification Measurement Model to Investigate the Implicit Learning of Metrical Temporal Patterns 
PLoS ONE  2013;8(9):e75163.
Implicit learning (IL) occurs unconsciously and without intention. Perceptual fluency is the ease of processing elicited by previous exposure to a stimulus. It has been assumed that perceptual fluency is associated with IL. However, the role of perceptual fluency following IL has not been investigated in temporal pattern learning. Two experiments by Schultz, Stevens, Keller, and Tillmann demonstrated the IL of auditory temporal patterns using a serial reaction-time task and a generation task based on the process dissociation procedure. The generation task demonstrated that learning was implicit in both experiments via motor fluency, that is, the inability to suppress learned information. With the aim to disentangle conscious and unconscious processes, we analyze unreported recognition data associated with the Schultz et al. experiments using the sequence identification measurement model. The model assumes that perceptual fluency reflects unconscious processes and IL. For Experiment 1, the model indicated that conscious and unconscious processes contributed to recognition of temporal patterns, but that unconscious processes had a greater influence on recognition than conscious processes. In the model implementation of Experiment 2, there was equal contribution of conscious and unconscious processes in the recognition of temporal patterns. As Schultz et al. demonstrated IL in both experiments using a generation task, and the conditions reported here in Experiments 1 and 2 were identical, two explanations are offered for the discrepancy in model and behavioral results based on the two tasks: 1) perceptual fluency may not be necessary to infer IL, or 2) conscious control over implicitly learned information may vary as a function of perceptual fluency and motor fluency.
doi:10.1371/journal.pone.0075163
PMCID: PMC3783451  PMID: 24086461
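The generation task mentioned above was "based on the process dissociation procedure". In that standard procedure (Jacoby, 1991), performance under inclusion and exclusion instructions is combined to estimate controlled (conscious) and automatic (unconscious) influences; the arithmetic below is the textbook form of those estimates, not the sequence identification measurement model itself, and the numbers are hypothetical.

```python
def process_dissociation(p_inclusion, p_exclusion):
    """Estimate controlled (C) and automatic (A) influences from the probability of
    producing studied material under inclusion vs. exclusion instructions:
        P(inclusion) = C + (1 - C) * A
        P(exclusion) = (1 - C) * A
    """
    c = p_inclusion - p_exclusion
    a = p_exclusion / (1.0 - c) if c < 1.0 else float("nan")
    return c, a

# Hypothetical generation data: studied patterns produced on 60% of inclusion
# trials but still on 30% of exclusion trials.
c, a = process_dissociation(0.60, 0.30)
```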
25.  Low-level sensory plasticity during task-irrelevant perceptual learning: Evidence from conventional and double training procedures 
Vision Research  2009;50(4):424-432.
Studies of perceptual learning have focused on aspects of learning that are related to early stages of sensory processing. However, conclusions that perceptual learning results in low-level sensory plasticity are controversial, since such learning may also be attributed to plasticity in later stages of sensory processing or in readout from sensory to decision stages, or to changes in high-level central processing. To address this controversy, we developed a novel random dot motion (RDM) stimulus to target motion cells selective to contrast polarity, by ensuring that the motion direction information arises only from signal dot onsets and not their offsets, and used these stimuli in the paradigm of task-irrelevant perceptual learning (TIPL). In TIPL, learning is achieved in response to a stimulus by subliminally pairing that stimulus with the targets of an unrelated training task. In this manner, we are able to probe learning for an aspect of motion processing thought to be a function of directional V1 simple cells with a learning procedure that dissociates the learned stimulus from the decision processes relevant to the training task. Our results show direction-selective learning for the designated contrast polarity that does not transfer to the opposite contrast polarity. This polarity specificity was replicated in a double training procedure in which subjects were additionally exposed to the opposite polarity. Taken together, these results suggest that TIPL for motion stimuli may occur at the stage of directional V1 simple cells. Finally, a theoretical explanation of the data is provided.
doi:10.1016/j.visres.2009.09.022
PMCID: PMC2824078  PMID: 19800358
