Sequence learning in serial reaction time (SRT) tasks has been investigated mostly with unimodal stimulus presentation. This approach disregards the possibility that sequence acquisition may be guided by multiple sources of sensory information simultaneously. In the current study, we trained participants in an SRT task with visual-only, tactile-only, or bimodal (visual and tactile) stimulus presentation. Sequence performance of the bimodal and visual-only training groups was similar, and both outperformed the tactile-only training group. In a subsequent transfer phase, participants from all three training groups were tested in conditions with visual, tactile, and bimodal stimulus presentation. Sequence performance of the visual-only and bimodal training groups was again highly similar across these identical stimulus conditions, indicating that the addition of tactile stimuli did not benefit the bimodal training group. Moreover, comparison across identical stimulus conditions in the transfer phase showed that the poorer sequence performance of the tactile-only group during training probably reflected not a difference in sequence learning but merely a difference in the expression of sequence knowledge.
Experimental psychology; Motor learning; Sequence; Transfer of learning
The present fMRI study investigated the neural areas involved in implicit perceptual sequence learning. To obtain more insight into the functional contributions of the brain areas, we tracked both the behavioral and neural time course of the learning process, using a perceptual serial color matching task. Next, to investigate whether the neural time course was specific to perceptual information, imaging results were compared with the results of implicit motor sequence learning, previously investigated using an identical serial color matching task (Gheysen et al., 2010). Results indicated that implicit sequences can be acquired by at least two neural systems with different operating principles: the caudate nucleus and the hippocampus. The caudate nucleus contributed to the implicit sequence learning process for perceptual as well as motor information in a similar and gradual way. The hippocampus, on the other hand, was engaged in a much faster learning process, which was more pronounced for the motor than for the perceptual task. Interestingly, the perceptual and motor learning processes occurred at a comparable implicit level, suggesting that consciousness is not the main factor dissociating the hippocampal from the caudate learning system. This study is not only the first to successfully and unambiguously compare brain activation between perceptual and motor levels of implicit sequence learning; it also provides new insights into the specific hippocampal and caudate learning functions.
implicit sequence learning; perceptual sequence learning; motor sequence learning; fMRI; caudate nucleus; hippocampus
Control of familiar visually guided movements involves internal plans as well as visual and other online sensory information, though how visual input and internal plans combine for reaching movements remains unclear. Traditional motor sequence learning tasks, such as the serial reaction time task, use stereotyped movements and measure only reaction time. Here, we used a continuous sequential reaching task composed of naturalistic movements in order to provide detailed kinematic performance measures. When we embedded pre-learned trajectories (those presumably having an internal plan) within similar but unpredictable movement sequences, participants performed the two kinds of movements with remarkable similarity, and position error alone could not reliably identify the epoch. For such embedded movements, performance during pre-learned sequences showed statistically significant but trivial decreases in measures of kinematic error compared to performance during novel sequences. However, different sets of kinematic error variables changed significantly between learned and novel sequences for individual participants, suggesting that each participant used distinct motor strategies favoring different kinematic variables during each of the two movement types. Algorithms that incorporated multiple kinematic variables identified transitions between the two movement types well but imperfectly. Hidden Markov model classification differentiated learned and novel movements on single trials based on the above kinematic error variables with 82±5% accuracy within 244±696 ms, despite the limited extent of changes in those errors. These results suggest that the motor system can achieve markedly similar performance whether or not an internal plan is present, as only subtle changes arise from any difference between the neural substrates involved in those two conditions.
Motor control; Reaching; Human; Sequence Learning; Hidden Markov models
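The hidden Markov model classification described above can be illustrated with a minimal sketch: a two-state HMM (learned vs. novel movement) with Gaussian emissions over a single kinematic-error signal, decoded with the Viterbi algorithm. All numbers below (state means, variance, transition matrix, and the toy error trace) are hypothetical stand-ins for the study's actual multi-variable features, not its fitted parameters.

```python
import numpy as np

def viterbi_two_state(obs, means, var, trans, init):
    """Most-likely hidden-state path for a 2-state HMM with 1-D Gaussian emissions."""
    def log_emit(x, mu):
        # Log-density of a Gaussian with mean mu and shared variance var
        return -0.5 * np.log(2 * np.pi * var) - (x - mu) ** 2 / (2 * var)

    n = len(obs)
    log_t = np.log(np.asarray(trans))
    delta = np.log(np.asarray(init)) + np.array([log_emit(obs[0], m) for m in means])
    back = np.zeros((n, 2), dtype=int)
    for t in range(1, n):
        new_delta = np.empty(2)
        for j in range(2):
            scores = delta + log_t[:, j]           # best way to arrive in state j
            back[t, j] = int(np.argmax(scores))
            new_delta[j] = scores[back[t, j]] + log_emit(obs[t], means[j])
        delta = new_delta
    # Backtrack from the best final state
    path = [int(np.argmax(delta))]
    for t in range(n - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]  # 0 = "learned", 1 = "novel" (labels arbitrary here)

# Toy kinematic-error trace: low error first (learned epoch), then higher (novel)
errors = [0.1, 0.2, 0.15, 0.9, 1.1, 1.0]
states = viterbi_two_state(errors, means=[0.15, 1.0], var=0.05,
                           trans=[[0.9, 0.1], [0.1, 0.9]], init=[0.5, 0.5])
# recovers [0, 0, 0, 1, 1, 1]: a learned-to-novel transition at the fourth sample
```

The "sticky" diagonal of the transition matrix encodes the assumption that a movement type persists across consecutive samples, which is what lets the decoder place a single transition despite noisy per-sample errors.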
Adult learning-induced sensory cortex plasticity results in enhanced action potential rates in neurons that have the most relevant information for the task, or those that respond strongly to one sensory stimulus but weakly to its comparison stimulus. Current theories suggest that this plasticity arises when target-stimulus-evoked activity is enhanced by reward signals from neuromodulatory nuclei. Prior work has found evidence suggestive of nonselective enhancement of neural responses and of suppression of responses to task distractors, but the differences in these effects between detection and discrimination have not been directly tested. Using cortical implants, we characterized physiological responses in macaque somatosensory cortex during serial, matched detection and discrimination tasks. Nonselective increases in neural responsiveness were observed during detection learning. Suppression of responses to task distractors was observed during discrimination learning, and this suppression was specific to cortical locations that sampled responses to the task distractor before learning. Changes in receptive field size were measured as the area of skin with a significant response to a constant-magnitude stimulus, and these areal changes paralleled the changes in responsiveness. From before detection learning until after discrimination learning, the enduring changes were selective suppression of cortical locations responsive to task distractors and nonselective enhancement of responsiveness at cortical locations selective for target and control skin sites. A comparison of prior observations with the plasticity effects observed here suggests that nonselective response enhancement and selective suppression suffice to explain known plasticity phenomena in simple spatial tasks.
This work suggests that differential responsiveness to task targets and distractors in primary sensory cortex during simple spatial detection and discrimination tasks arises from nonselective increases in response over a broad cortical locus that includes the representation of the task target, together with selective suppression of responses to the task distractor within this locus.
This study investigated the reinforcing properties, limits, and motivating potentials of sensory stimuli with autistic children. In the first phase of the study, four intellectually retarded autistic children were exposed to three different types of sensory stimulation (vibration, music, and strobe light) as well as edible and social reinforcers for ten-second intervals contingent upon six simple bar pressing responses. In the second phase, the same events were used as reinforcers for correct responses in learning object labels. The results indicated that: (a) sensory stimuli can be used effectively as reinforcers to maintain high, durable rates of responding in a simple pressing task; (b) ranked preferences for sensory stimuli revealed a unique configuration of responding for each child; and (c) sensory stimuli have motivating potentials comparable to those of the traditional food and social reinforcers even when training receptive language tasks.
It is well documented that East Asians differ from Westerners in conscious perception and attention. However, few studies have explored cultural differences in unconscious processes such as implicit learning.
The global-local Navon letters were adopted in a serial reaction time (SRT) task, during which Chinese and British participants were instructed to respond to global or local letters, to investigate whether culture influences what people acquire in implicit sequence learning. Our results showed that, from the beginning, British participants expressed a greater local bias in perception than Chinese participants, confirming a cultural difference in perception. Further, over extended exposure, Chinese participants learned the target regularity better than British participants when the targets were global, indicating a global advantage for the Chinese in implicit learning. Moreover, Chinese participants acquired greater unconscious knowledge of an irrelevant regularity than British participants, indicating that the Chinese were more sensitive to contextual regularities than the British.
The results suggest that cultural biases can profoundly influence both what people consciously perceive and unconsciously learn.
Perception and action are the result of an integration of various sources of information, such as current sensory input, prior experience, or the context in which a stimulus occurs. Often, the interpretation is not trivial and hence needs to be learned from the co-occurrence of stimuli. Yet how do we combine such diverse information to guide our actions? Here we use a distance production-reproduction task to investigate the influence of auxiliary symbolic cues, sensory input, and prior experience on human performance under three different conditions that vary in the information provided. Our results indicate that subjects can (1) learn the mapping of a verbal, symbolic cue onto the stimulus dimension and (2) integrate symbolic information and prior experience into their estimate of displacements. The behavioral results are explained by two distinct generative models that represent different structural approaches to how a Bayesian observer would combine prior experience, sensory input, and symbolic cue information into a single estimate of displacement. The first model interprets the symbolic cue in the context of categorization, assuming that it reflects information about a distinct underlying stimulus range (categorical model). The second model applies a multi-modal integration approach and treats the symbolic cue as additional sensory input to the system, which is combined with the current sensory measurement and the subjects’ prior experience (cue-combination model). Notably, both models account equally well for the observed behavior despite their different structural assumptions. The present work thus provides evidence that humans can interpret abstract symbolic information and combine it with other types of information such as sensory input and prior experience. The similar explanatory power of the two models further suggests that issues such as categorization and cue combination could be explained by alternative probabilistic approaches.
pre-cueing; path integration; cue-combination; multi-modal; categorization; experience-dependent prior; magnitude reproduction; iterative Bayes
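The core computation behind the cue-combination account can be sketched in a few lines: when independent Gaussian sources (prior experience, sensory measurement, symbolic cue) are fused, the Bayesian estimate is their precision-weighted average, and the posterior variance is the inverse of the summed precisions. The specific numbers below are hypothetical, chosen only to illustrate the arithmetic, and this is not the paper's fitted model.

```python
def combine_gaussian(estimates, sds):
    """Precision-weighted fusion of independent Gaussian sources.

    Posterior mean = sum(w_i * mu_i) with w_i = (1/sd_i^2) / sum_j(1/sd_j^2);
    posterior sd = sqrt(1 / sum_j(1/sd_j^2)).
    """
    precisions = [1.0 / s ** 2 for s in sds]
    total = sum(precisions)
    mean = sum(p * m for p, m in zip(precisions, estimates)) / total
    sd = (1.0 / total) ** 0.5
    return mean, sd

# Hypothetical displacement estimates (arbitrary units): a prior over distances,
# a noisy sensory measurement, and a symbolic cue treated as extra sensory input.
mean, sd = combine_gaussian(estimates=[10.0, 12.0, 9.0], sds=[2.0, 1.0, 2.0])
# the most reliable source (sd = 1.0) dominates: mean ≈ 11.17, sd ≈ 0.82
```

Note how the posterior is pulled toward the most precise source; the categorical model in the abstract would instead use the cue to select which prior applies before this fusion step.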
In the serial reaction time task (SRTT), a sequence of visuo-spatial cues instructs subjects to perform a sequence of movements that follows a repeating pattern. Though motor responses are known to support implicit sequence learning in this task, the goal of the present experiments is to determine whether observation of the sequence of cues alone can also yield evidence of implicit sequence learning. This question has been difficult to answer because, in previous research, performance improvements that appeared to be due to implicit perceptual sequence learning could also have been due to spontaneous increases in explicit knowledge of the sequence. The present experiments use probabilistic sequences to prevent the spontaneous development of explicit awareness. They include a training phase, during which half of the subjects observe and the other half respond, followed by a transfer phase in which everyone responds. Results show that observation alone can support sequence learning, which translates at transfer into performance equivalent to that of a group who made motor responses during training. However, perceptual learning or its expression is sensitive to changes in target colors, and its expression is impaired by concurrent explicit search. Motor-response-based learning is not affected by these manipulations. Thus, observation alone can support implicit sequence learning, even of higher-order probabilistic sequences. However, perceptual learning can be prevented or concealed by variations of stimuli or task demands.
Implicit; Explicit; Perceptual Learning; Sequence Learning; Motor Learning
In the current paper, we first evaluate the suitability of traditional serial
reaction time (SRT) and artificial grammar learning (AGL) experiments for
measuring implicit learning of social signals. We then report the results of a
novel sequence learning task which combines aspects of the SRT and AGL paradigms
to meet our suggested criteria for how implicit learning experiments can be
adapted to increase their relevance to situations of social intuition. The
sequences followed standard finite-state grammars. Sequence learning and
consciousness of acquired knowledge were compared between 2 groups of 24
participants viewing either sequences of individually presented letters or
sequences of body-posture pictures, which were described as series of yoga
movements. Participants in both conditions showed above-chance classification
accuracy, indicating that sequence learning had occurred in both stimulus
conditions. This shows that sequence learning can still be found when learning
procedures reflect the characteristics of social intuition. Rule awareness was
measured using trial-by-trial evaluation of decision strategy (Dienes & Scott, 2005; Scott & Dienes, 2008). For letters,
sequence classification was best on trials where participants reported
responding on the basis of explicit rules or memory, indicating some explicit
learning in this condition. For body-posture pictures, classification was not
above chance on these trial types, but instead tended to be best on trials
where participants reported that their responses were based on intuition,
familiarity, or random choice, suggesting that learning was more implicit.
Results therefore indicate that the use of traditional stimuli in research on
sequence learning might underestimate the extent to which learning is implicit
in domains such as social learning, contributing to ongoing debate about
levels of conscious awareness in implicit learning.
implicit learning; social intuition; intuition; artificial grammar learning; human movement; consciousness; fringe consciousness
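The "standard finite-state grammars" used to build such sequences can be sketched concretely: a grammar is a set of states, each offering legal (symbol, next-state) transitions, and training strings are random walks through it; test strings are then classified as grammatical or not. The grammar below is a small made-up example, not the one used in the study, and the symbol labels are arbitrary.

```python
import random

# A small, hypothetical finite-state grammar: each state maps to legal
# (emitted symbol, next state) choices; (None, None) marks a legal stop.
GRAMMAR = {
    0: [("M", 1), ("V", 2)],
    1: [("X", 1), ("T", 2)],
    2: [("R", 0), (None, None)],   # may emit R and loop back, or terminate
}

def generate(grammar, start=0, rng=random):
    """Emit one symbol string via a random walk through the grammar."""
    out, state = [], start
    while True:
        symbol, nxt = rng.choice(grammar[state])
        if symbol is None:
            return "".join(out)
        out.append(symbol)
        state = nxt

def is_grammatical(grammar, string, start=0):
    """True if the string can be produced by some path ending at a stop state."""
    states = {start}
    for ch in string:
        states = {nxt for s in states for sym, nxt in grammar[s] if sym == ch}
        if not states:
            return False                      # no legal transition for ch
    return any(sym is None for s in states for sym, _ in grammar[s])

sample = generate(GRAMMAR, rng=random.Random(0))   # one training string
```

In an AGL-style classification test, every string produced by `generate` is grammatical by construction, while foils violate at least one transition; the same machinery applies whether the symbols are letters or body-posture pictures.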
The basal ganglia are thought to participate in implicit sequence learning. However, the exact nature of this role has been difficult to determine in light of the conflicting evidence on implicit learning in subjects with Parkinson’s disease (PD). We examined the performance of PD subjects using a modified form of the serial reaction time task, which ensured that learning remained implicit. Subjects with predominantly right-sided symptoms were trained on a 12-element sequence using the right hand. Although there was no evidence of sequence learning on the basis of response time savings, the subjects showed knowledge of the sequence when performance was assessed in terms of the number of errors made. This effect transferred to the left (untrained) hand as well. Thus, these data demonstrate that PD patients are not impaired at implicitly learning sequential order, but rather at the translation of sequence knowledge into rapid motor performance. Furthermore, the results suggest that the basal ganglia are not essential for implicit sequence learning in PD.
sequence learning; parkinson’s disease; implicit; serial reaction time
Much research has been aimed at the representations and mechanisms that
enable learning of sequential structures. A central debate concerns the
question of whether item-item associations (i.e., in the sequence A-B-C-D,
B comes after A) or associations of item
and serial list position (i.e., B is the second item in the
list) are used to represent serial order. Previously, we showed that in a
variant of the implicit serial reaction time task, the sequence representation
contains associations between serial position and item information (Schuck, Gaschler, Keisler, & Frensch,
2011). Here, we applied models and research methods from working
memory research to implicit serial learning to replicate and extend our
findings. The experiment involved three sessions of sequence learning. Results
support the view that participants acquire knowledge about order structure
(item-item associations) and about ordinal structure (serial position-item
associations). Analyses suggest that only the simultaneous use of both types
of knowledge can explain learning-related performance increases.
Additionally, our results indicate that serial list position information plays a
role very early in learning and that inter-item associations increasingly
control behavior in later stages.
implicit sequence learning; serial order; SRT; chaining; race model
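The claim that only the simultaneous use of item-item and position-item knowledge explains performance can be illustrated with a toy coverage count: each knowledge source predicts the upcoming item on a different subset of trials, so their union outperforms either alone. The sequence, the partial positional knowledge, and the "covered trial" criterion below are all hypothetical simplifications, not the authors' race model.

```python
# Toy sketch: two knowledge sources jointly predicting the next item in the
# repeating sequence A B A C. Chaining knowledge holds only the unambiguous
# item->item transitions (B->A, C->A); positional knowledge is assumed
# (hypothetically) to cover only the first two list positions. A trial counts
# as covered if at least one source predicts the upcoming item.

SEQ = ["A", "B", "A", "C"]

CHAIN = {"B": "A", "C": "A"}      # A -> {B, C} is ambiguous, so omitted
POSITION = {0: "A", 1: "B"}       # partial serial-position knowledge

def covered(trials, use_chain, use_pos):
    """Fraction of trials where at least one source predicts the next item."""
    hits = 0
    for t in range(trials):
        pos = t % len(SEQ)
        prev, nxt = SEQ[pos - 1], SEQ[pos]   # pos 0 wraps to the last item
        by_chain = use_chain and CHAIN.get(prev) == nxt
        by_pos = use_pos and POSITION.get(pos) == nxt
        hits += by_chain or by_pos
    return hits / trials

chain_only = covered(400, True, False)   # predicts after B and after C
pos_only = covered(400, False, True)     # predicts at positions 0 and 1
both = covered(400, True, True)          # union of the two sources
```

Under these assumptions each source alone covers half of the trials, while the combination covers three quarters, mirroring the qualitative point that performance gains can require both kinds of knowledge at once.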
Over the last 20 years researchers have used the serial reaction time (SRT) task
to investigate the nature of spatial sequence learning. They have used the task
to identify the locus of spatial sequence learning, identify situations that
enhance and those that impair learning, and identify the important cognitive
processes that facilitate this type of learning. Although controversies remain,
the SRT task has been integral in enhancing our understanding of implicit
sequence learning. It is important, however, to ask what, if anything, the
discoveries made using the SRT task tell us about implicit learning more
generally. This review analyzes the state of the current spatial SRT sequence
learning literature highlighting the stimulus-response rule hypothesis of
sequence learning which we believe provides a unifying account of discrepant SRT
data. It also challenges researchers to use the vast body of knowledge acquired
with the SRT task to understand other implicit learning literatures too often
ignored in the context of this particular task. This broad perspective will make
it possible to identify congruences among data acquired using various
tasks that will allow us to generalize about the nature of implicit learning.
sequence learning; implicit learning; serial reaction time task
The current study examined the functional role redundant amodal information plays in an operant learning task in 5-month-old human infants. Prior studies have suggested that both simple and complex learning processes (discrimination, associative conditioning) are facilitated when amodal information is presented redundantly across sensory modalities. These studies, however, did not test whether the amodal information had to be similar across modalities for facilitation to occur. The current study examined how both matching and mismatching redundant amodal information about the shape of an object would influence learning of an operant response in human infants. Infants learned an operant kick response to move a mobile of cylinders while either holding a cylinder, a rectangular cube, or no object. Kick rate served as the dependent measure. The results showed that infants given mismatching redundant amodal information (e.g., viewed cylinders while holding a rectangular cube) showed inhibited operant learning. These results extend the Intersensory Redundancy Hypothesis by demonstrating that amodal redundancy can function in some instances to inhibit complex learning processes.
Amodal redundancy; Operant learning; Inhibition; Multisensory processing; Shape; Human infants
The reuniens (Re) and rhomboid (Rh) nuclei are major sources of thalamic input to hippocampus and medial prefrontal cortex. We compared effects of lesions in ReRh and other parts of the midline-intralaminar complex on tasks affected by lesions in terminal fields innervated by these nuclei, including: visuospatial reaction time (VSRT), a measure of sensory guided responding; serial VSRT, a measure of action sequence learning; and win/shift radial arm maze (RAM) measures of spatial memory. ReRh lesions affected RAM, but not VSRT or serial VSRT performance. The effects of caudal intralaminar lesions were doubly dissociated from ReRh lesions, affecting VSRT, but not RAM or serial VSRT performance. Rostral intralaminar lesions did not produce significant impairments, other than a subgroup with larger lesions that were impaired performing a delayed RAM task. Combined lesions damaging all three sites produced RAM deficits comparable to ReRh lesions and VSRT deficits comparable to caudal intralaminar lesions. Thus there was no indication that deficits produced by lesions in one site were exacerbated significantly by the cumulative effect of damage in other parts of the midline-intralaminar complex. The effects of ReRh lesions provide evidence that these nuclei affect memory functions of hippocampus and medial prefrontal cortex. The double dissociation observed between the effects of ReRh and caudal intralaminar nuclei provides evidence that different nuclei within the midline-intralaminar complex affect distinct aspects of cognition consistent with the effects of lesions in the terminal fields they innervate.
diencephalic amnesia; midline and intralaminar thalamic nuclei; visuospatial reaction time; motor sequence learning
Implicit learning (IL) occurs unconsciously and without intention. Perceptual fluency is the ease of processing elicited by previous exposure to a stimulus. It has been assumed that perceptual fluency is associated with IL. However, the role of perceptual fluency following IL has not been investigated in temporal pattern learning. Two experiments by Schultz, Stevens, Keller, and Tillmann demonstrated the IL of auditory temporal patterns using a serial reaction-time task and a generation task based on the process dissociation procedure. The generation task demonstrated that learning was implicit in both experiments via motor fluency, that is, the inability to suppress learned information. With the aim to disentangle conscious and unconscious processes, we analyze unreported recognition data associated with the Schultz et al. experiments using the sequence identification measurement model. The model assumes that perceptual fluency reflects unconscious processes and IL. For Experiment 1, the model indicated that conscious and unconscious processes contributed to recognition of temporal patterns, but that unconscious processes had a greater influence on recognition than conscious processes. In the model implementation of Experiment 2, there was equal contribution of conscious and unconscious processes in the recognition of temporal patterns. As Schultz et al. demonstrated IL in both experiments using a generation task, and the conditions reported here in Experiments 1 and 2 were identical, two explanations are offered for the discrepancy in model and behavioral results based on the two tasks: 1) perceptual fluency may not be necessary to infer IL, or 2) conscious control over implicitly learned information may vary as a function of perceptual fluency and motor fluency.
Recent studies have reported abnormal implicit learning of sequential patterns in patients with schizophrenia. Because these studies were based on visuospatial cues, the question remained whether patients were impaired simply due to the demands of spatial processing. This study examined implicit sequence learning in 24 patients with schizophrenia and 24 healthy controls using a non-spatial variation of the serial reaction time test (SRT) in which pattern stimuli alternated with random stimuli on every other trial. Both groups showed learning by responding faster and more accurately to pattern trials than to random trials. Patients, however, showed a smaller magnitude of sequence learning. Both groups were unable to demonstrate explicit knowledge of the nature of the pattern, confirming that learning occurred without awareness. Clinical variables were not correlated with the patients' learning deficits. Patients with schizophrenia have a decreased ability to develop sensitivity to regularly occurring sequences of events within their environment. This type of deficit may affect an array of cognitive and motor functions that rely on the perception of event regularity.
Serial learning; Motor skills; Cognition; Psychiatry; Memory; Behavior
Patients with amnesia have deficits in declarative memory but intact memory for motor and perceptual skills, which suggests that explicit memory and implicit memory are distinct. However, the evidence that implicit motor learning is intact in amnesic patients is contradictory. This study investigated implicit sequence learning in amnesic patients with Korsakoff’s syndrome (N = 20) and matched controls (N = 14), using the classical Serial Reaction Time Task and a newly developed Pattern Learning Task in which the planning and execution of the responses are more spatially demanding. Results showed that implicit motor learning occurred in both groups of participants; however, on the Pattern Learning Task, the percentage of errors did not increase in the Korsakoff group in the random test phase, which is indicative of less implicit learning. Thus, our findings show that the performance of patients with Korsakoff’s syndrome is compromised on an implicit learning task with a strong spatial response component.
Korsakoff’s syndrome; Amnesia; Implicit learning; Motor learning; Sequence learning; Memory
The retrosplenial cortex (RSP) is highly interconnected with medial temporal lobe structures, yet relatively little is known about its specific contributions to learning and memory. One possibility is that RSP is involved in forming associations between multiple sensory stimuli. Indeed, damage to RSP disrupts learning about spatial or contextual cues and also impairs learning about co-occurring conditioned stimuli (CSs). Two experiments were conducted to test this notion more rigorously. In Experiment 1, rats were trained in a serial feature negative discrimination task consisting of reinforced presentations of a tone alone and non-reinforced serial presentations of a light followed by the tone. Thus, in contrast to prior studies, this paradigm involved serial, rather than simultaneous, presentation of the CSs. Rats with damage to RSP failed to acquire the discrimination, indicating that RSP is required for forming associations between sensory stimuli regardless of whether they occur serially or simultaneously. In Experiment 2, a sensory preconditioning task was used to determine whether RSP is necessary for forming associations between stimuli even in the absence of reinforcement. During the first phase of this procedure, one auditory stimulus was paired with a light while a second auditory stimulus was presented alone. In the next phase of training, the same light was paired with food. During the final phase of the procedure, both auditory stimuli were presented alone during a single session. Control rats, but not RSP-lesioned rats, exhibited more food-cup behavior following presentation of the auditory cue that had previously been paired with the light than following the unpaired auditory stimulus, indicating that a stimulus-stimulus association was formed during the first phase of training. These results support the idea that RSP has a fundamental role in forming associations between environmental stimuli.
medial temporal lobe; ambiguity; compound stimuli; conditioned inhibition; cingulate
Recent research on task-irrelevant perceptual learning (TIPL) demonstrates that stimuli consistently presented at relevant points in time (e.g., with task targets or rewards) are learned, even in the absence of attention to those stimuli. However, different research paradigms have observed different results for how salient stimuli are learned, with some studies showing no learning, some showing positive learning effects, and others showing negative learning effects. In this paper we focused on how the level of processing of stimuli affects fast-TIPL. We conducted three experiments in which the level of processing of the information paired with a target was manipulated. Our results indicated that fast-TIPL occurs when participants have to memorize the information presented with the target, but also when they merely have to process this information for a secondary task without explicitly memorizing those stimuli. However, fast-TIPL does not occur when participants have to ignore the target-paired information. This observation is consistent with recent models of TIPL suggesting that attentional signals can either enhance or suppress learning depending on whether those stimuli are distracting with respect to the subjects' objectives. Our results also revealed a robust gender effect in fast-TIPL: male subjects consistently showed fast-TIPL, whereas the observation of fast-TIPL was inconsistent in female subjects.
Objective: The purpose of this study was to investigate the effects of specific types of tasks on the efficiency of implicit procedural learning in the presence of developmental dyslexia (DD).
Methods: Sixteen children with DD (mean (SD) age 11.6 (1.4) years) and 16 matched normal reader controls (mean age 11.4 (1.9) years) were administered two tests (the Serial Reaction Time test and the Mirror Drawing test) in which implicit knowledge was gradually acquired across multiple trials. Although both tests analyse implicit learning abilities, they tap different competencies. The Serial Reaction Time test requires the development of sequential learning and little (if any) procedural learning, whereas the Mirror Drawing test involves fast and repetitive processing of visuospatial stimuli but no acquisition of sequences.
Results: The children with DD were impaired on both implicit learning tasks, suggesting that the learning deficit observed in dyslexia does not depend on the material to be learned (with or without motor sequence of response action) but on the implicit nature of the learning that characterises the tasks.
Conclusion: Individuals with DD have impaired implicit procedural learning.
Implicit learning is often assumed to be an effortless process. However, some
artificial grammar learning and sequence learning studies using dual tasks seem
to suggest that attention is essential for implicit learning to occur. This
discrepancy probably results from the specific type of secondary task that is
used. Different secondary tasks may engage attentional resources differently and
therefore may bias performance on the primary task in different ways. Here, we
used a random number generation (RNG) task, which may allow for a closer
monitoring of a participant’s engagement in a secondary task than the popular
secondary task in sequence learning studies: tone counting (TC). In the first
two experiments, we investigated the interference associated with performing RNG
concurrently with a serial reaction time (SRT) task. In a third experiment, we
compared the effects of RNG and TC. In all three experiments, we directly
evaluated participants’ knowledge of the sequence with a subsequent sequence
generation task. Sequence learning was consistently observed in all experiments,
but was impaired under dual-task conditions. Most importantly, our data suggest
that RNG is more demanding and impairs learning to a greater extent than TC.
Nevertheless, we failed to observe effects of the secondary task in subsequent
sequence generation. Our studies indicate that RNG is a promising task to
explore the involvement of attention in the SRT task.
implicit learning; attention; serial reaction time task; random number generation task; tone counting task
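Part of the appeal of RNG as a secondary task is that the randomness of a participant's output can be scored continuously, yielding an index of engagement. As an illustrative sketch only (not the analysis used in these experiments), the first-order Shannon entropy of the generated digits, normalized by its maximum, provides one such index:

```python
import math
from collections import Counter

def entropy_index(seq, alphabet_size=10):
    """First-order Shannon entropy of a generated digit sequence,
    expressed as a fraction of the maximum possible entropy.
    Values near 1.0 indicate evenly distributed (more random) output;
    lower values indicate stereotyped responding."""
    counts = Counter(seq)
    n = len(seq)
    h = -sum((c / n) * math.log2(c / n) for c in counts.values())
    return h / math.log2(alphabet_size)
```

For example, a sequence that uses only one digit scores 0.0, while one that uses all ten digits equally often scores 1.0. Note that first-order entropy is blind to sequential dependencies (e.g., always counting upward), so published RNG measures typically also score digram repetitions and runs.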
Natural objects provide partially redundant information to the brain through different sensory modalities. For example, voices and faces both give information about the speech content, age, and gender of a person. Thanks to this redundancy, multimodal recognition is fast, robust, and automatic. In unimodal perception, however, only part of the information about an object is available. Here, we addressed whether, even under conditions of unimodal sensory input, crossmodal neural circuits that have been shaped by previous associative learning become activated and underpin a performance benefit. We measured brain activity with functional magnetic resonance imaging before, during, and after participants learned to associate either sensory redundant stimuli, i.e. voices and faces, or arbitrary multimodal combinations, i.e. voices and written names, ring tones and cell phones, or ring tones and brand names of these cell phones. After learning, participants were better at recognizing unimodal auditory voices that had been paired with faces than those paired with written names, and association of voices with faces resulted in an increased functional coupling between voice and face areas. No such effects were observed for ring tones that had been paired with cell phones or names. These findings demonstrate that brief exposure to ecologically valid and sensory redundant stimulus pairs, such as voices and faces, induces specific multisensory associations. Consistent with predictive coding theories, associative representations thereafter become available for unimodal perception and facilitate object recognition. These data suggest that for natural objects effective predictive signals can be generated across sensory systems and proceed by optimization of functional connectivity between specialized cortical sensory modules.
This study illustrates that recognition of natural objects under conditions where only one sensory modality is available can rely on implicit access to multisensory representations of the stimulus.
How does the brain determine what to learn and what not to learn? Previous studies showed that a feature or stimulus on which subjects performed a task was learned, while features or stimuli that were irrelevant to the task were not. This led some researchers to conclude that attention to a stimulus is necessary for the stimulus to be learned. This view was challenged by the discovery of task-irrelevant perceptual learning, in which learning occurred by mere exposure to an unattended, subthreshold stimulus. However, this exposure-based learning does not necessarily indicate that all presented stimuli are learned. Rather, recent studies showed that the occurrence of this learning is highly selective: learning of an unattended stimulus occurred only (1) when the unattended stimulus was temporally associated with the processing of an attended target, (2) when the unattended stimulus was presented synchronously with reinforcers, such as internal or external rewards, and (3) when the unattended stimulus had subliminal properties. These selectivities suggest some degree of similarity between task-relevant and task-irrelevant perceptual learning, which has motivated a unified model in which both forms of learning arise from similar or identical mechanisms.
perceptual learning; task-irrelevant perceptual learning; exposure-based perceptual learning
Knowledge of sequential relationships enables future events to be anticipated and processed efficiently. Research with the serial reaction time task (SRTT) has shown that sequence learning often occurs implicitly without effort or awareness. Here we report four experiments that use a triplet-learning task (TLT) to investigate sequence learning in young and older adults. In the TLT people respond only to the last target event in a series of discrete, three-event sequences or triplets. Target predictability is manipulated by varying the triplet frequency (joint probability) and/or the statistical relationships (conditional probabilities) among events within the triplets. Results revealed that both groups learned, though older adults showed less learning of both joint and conditional probabilities. Young people used the statistical information in both cues, but older adults relied primarily on information in the second cue alone. We conclude that the TLT complements and extends the SRTT and other tasks by offering flexibility in the kinds of sequential statistical regularities that may be studied as well as by controlling event timing and eliminating motor response sequencing.
Implicit sequence learning; aging; implicit learning; serial reaction time task; statistical learning
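The two statistics manipulated in the TLT can be made concrete with a small sketch. Assuming a stream of discrete three-event triplets (the function name and toy data below are illustrative, not taken from the study), the joint probability of a triplet and the conditional probability of its target given the two cues can be estimated as:

```python
from collections import Counter

def triplet_statistics(triplets):
    """Estimate the two statistics manipulated in a triplet-learning design:
    the joint probability of a full (cue1, cue2, target) triplet and the
    conditional probability of the target given the two cues."""
    joint = Counter(triplets)            # counts of full triplets
    cues = Counter(t[:2] for t in triplets)  # counts of cue pairs
    n = len(triplets)
    return {
        t: {
            "joint": c / n,                 # P(cue1, cue2, target)
            "conditional": c / cues[t[:2]], # P(target | cue1, cue2)
        }
        for t, c in joint.items()
    }
```

The two quantities dissociate: a triplet can be frequent overall (high joint probability) yet have a target that is poorly predicted by its cues (low conditional probability), and vice versa, which is what lets the design separate learning of frequency from learning of statistical structure.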
Animals must continuously evaluate sensory information to select the preferable among possible actions in a given context, including the option to wait for more information before committing to another course of action. In experimental sensory decision tasks that replicate these features, reaction time distributions can be informative about the implicit rules by which animals determine when to commit and what to do. We measured reaction times of Long-Evans rats discriminating the direction of motion in a coherent random dot motion stimulus, using a self-paced two-alternative forced-choice (2-AFC) reaction time task. Our main findings are: (1) When motion strength was constant across trials, the error trials had shorter reaction times than correct trials; in other words, accuracy increased with response latency. (2) When motion strength was varied in randomly interleaved trials, accuracy increased with motion strength, whereas reaction time decreased. (3) Accuracy increased with reaction time for each motion strength considered separately, and in the interleaved motion strength experiment overall. (4) When stimulus duration was limited, accuracy improved with stimulus duration, whereas reaction time decreased. (5) Accuracy decreased with response latency after stimulus offset. This was the case for each stimulus duration considered separately, and in the interleaved duration experiment overall. We conclude that rats integrate visual evidence over time, but in this task the time of their response is governed more by elapsed time than by a criterion for sufficient evidence.
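Finding (2), that accuracy rises while reaction time falls as motion strength increases, is the qualitative signature of bounded evidence accumulation. A minimal drift-diffusion sketch reproduces this pattern (parameters here are illustrative, not fitted to the rat data, and this simple bound-based stopping rule is the framework against which the elapsed-time account is contrasted):

```python
import random

def diffusion_trial(drift, bound=1.0, dt=0.01, sigma=1.0, rng=random):
    """One trial of a simple drift-diffusion model: evidence drifts toward
    the correct bound at rate `drift` plus Gaussian noise, and the trial
    ends when either bound is crossed. Returns (correct, reaction_time)."""
    x, t = 0.0, 0.0
    while abs(x) < bound:
        x += drift * dt + sigma * rng.gauss(0.0, dt ** 0.5)
        t += dt
    return (x >= bound), t

def simulate(drift, n=2000, seed=0):
    """Mean accuracy and mean reaction time over n simulated trials."""
    rng = random.Random(seed)
    results = [diffusion_trial(drift, rng=rng) for _ in range(n)]
    accuracy = sum(c for c, _ in results) / n
    mean_rt = sum(t for _, t in results) / n
    return accuracy, mean_rt
```

Raising the drift rate (the analog of motion strength) makes simulated choices both faster and more accurate, matching finding (2); the study's contribution is to show that other features of the rats' data, such as accuracy continuing to rise with latency at fixed strength, point to a stopping rule governed more by elapsed time than by a fixed evidence criterion.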