In the current paper, we first evaluate the suitability of traditional serial
reaction time (SRT) and artificial grammar learning (AGL) experiments for
measuring implicit learning of social signals. We then report the results of a
novel sequence learning task which combines aspects of the SRT and AGL paradigms
to meet our suggested criteria for how implicit learning experiments can be
adapted to increase their relevance to situations of social intuition. The
sequences followed standard finite-state grammars. Sequence learning and
consciousness of acquired knowledge were compared between two groups of 24
participants viewing either sequences of individually presented letters or
sequences of body-posture pictures, which were described as series of yoga
movements. Participants in both conditions showed above-chance classification
accuracy, indicating that sequence learning had occurred in both stimulus
conditions. This shows that sequence learning can still be found when learning
procedures reflect the characteristics of social intuition. Rule awareness was
measured using trial-by-trial evaluation of decision strategy (Dienes & Scott, 2005; Scott & Dienes, 2008). For letters,
sequence classification was best on trials where participants reported
responding on the basis of explicit rules or memory, indicating some explicit
learning in this condition. For body postures, classification was not above
chance on these trial types, but instead tended to be best on those
trials where participants reported that their responses were based on intuition,
familiarity, or random choice, suggesting that learning was more implicit.
Results therefore indicate that the use of traditional stimuli in research on
sequence learning might underestimate the extent to which learning is implicit
in domains such as social learning, contributing to ongoing debate about
levels of conscious awareness in implicit learning.
implicit learning; social intuition; intuition; artificial grammar learning; human movement; consciousness; fringe consciousness
In the serial reaction time task (SRTT), a sequence of visuo-spatial cues instructs subjects to perform a sequence of movements which follow a repeating pattern. Though motor responses are known to support implicit sequence learning in this task, the goal of the present experiments is to determine whether observation of the sequence of cues alone can also yield evidence of implicit sequence learning. This question has been difficult to answer because in previous research, performance improvements which appeared to be due to implicit perceptual sequence learning could also be due to spontaneous increases in explicit knowledge of the sequence. The present experiments use probabilistic sequences to prevent the spontaneous development of explicit awareness. They include a training phase, during which half of the subjects observe and the other half respond, followed by a transfer phase in which everyone responds. Results show that observation alone can support sequence learning, which translates at transfer into performance equivalent to that of a group that made motor responses during training. However, perceptual learning or its expression is sensitive to changes in target colors, and its expression is impaired by concurrent explicit search. Motor-response-based learning is not affected by these manipulations. Thus, observation alone can support implicit sequence learning, even of higher order probabilistic sequences. However, perceptual learning can be prevented or concealed by variations of stimuli or task demands.
Implicit; Explicit; Perceptual Learning; Sequence Learning; Motor Learning
When a perturbation is applied in a sensorimotor transformation task, subjects can adapt and maintain performance by either relying on sensory feedback, or, in the absence of such feedback, on information provided by rewards. For example, in a classical rotation task where movement endpoints must be rotated to reach a fixed target, human subjects can successfully adapt their reaching movements solely on the basis of binary rewards, although this proves much more difficult than with visual feedback. Here, we investigate such a reward-driven sensorimotor adaptation process in a minimal computational model of the task. The key assumption of the model is that synaptic plasticity is gated by the reward. We study how the learning dynamics depend on the target size, the movement variability, the rotation angle and the number of targets. We show that when the movement is perturbed for multiple targets, the adaptation process for the different targets can interfere destructively or constructively depending on the similarities between the sensory stimuli (the targets) and the overlap in their neuronal representations. Destructive interferences can result in a drastic slowdown of the adaptation. As a result of interference, the time to adapt varies non-linearly with the number of targets. Our analysis shows that these interferences are weaker if the reward varies smoothly with the subject's performance instead of being binary. We demonstrate how shaping the reward or shaping the task can accelerate the adaptation dramatically by reducing the destructive interferences. We argue that experimentally investigating the dynamics of reward-driven sensorimotor adaptation for more than one sensory stimulus can shed light on the underlying learning rules.
The brain has a robust ability to adapt to external perturbations imposed on acquired sensorimotor transformations. Here, we used a mathematical model to investigate the reward-based component in sensorimotor adaptations. We show that the shape of the delivered reward signal, which in experiments is usually binary to indicate success or failure, affects the adaptation dynamics. We demonstrate how the ability to adapt to perturbations by relying solely on binary rewards depends on motor variability, size of perturbation and the threshold for delivering the reward. When adapting motor responses to multiple sensory stimuli simultaneously, on-line interferences between the motor performance in response to the different stimuli occur as a result of the overlap in the neural representation of the sensory stimuli, as well as the physical distance between them. Adaptation may be extremely slow when perturbations are induced to a few stimuli that are physically different from each other because of destructive interferences. When intermediate stimuli are introduced, the physical distance between neighbor stimuli is reduced, and constructive interferences can emerge, resulting in faster adaptation. Remarkably, adaptation to a widespread sensorimotor perturbation is accelerated by increasing the number of sensory stimuli during training, i.e. learning is faster if one learns more.
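The core model assumption, synaptic change gated by a binary reward, can be sketched in a minimal simulation. This is an illustrative toy (a scalar motor command, a single target, and made-up parameter values), not the authors' model:

```python
import numpy as np

rng = np.random.default_rng(0)

def adapt_to_rotation(rotation=10.0, sigma=6.0, threshold=6.0,
                      eta=0.5, n_trials=1000):
    """Reward-gated adaptation of a single movement direction.

    Each trial adds exploratory motor noise to the learned command;
    the weight update is the noise itself, gated by a binary reward
    delivered when the endpoint lands near the rotated target.
    """
    w = 0.0                 # learned compensation for the perturbation
    errors = []
    for _ in range(n_trials):
        noise = rng.normal(0.0, sigma)      # motor variability / exploration
        endpoint = w + noise                # executed movement
        error = endpoint - rotation         # residual error after perturbation
        reward = 1.0 if abs(error) < threshold else 0.0
        w += eta * reward * noise           # plasticity gated by reward
        errors.append(error)
    return w, errors

w, errors = adapt_to_rotation()
```

With these parameters the command drifts toward the imposed rotation; shrinking `threshold` (the reward window) relative to `sigma` makes rewards rare and slows or stalls adaptation, mirroring the model's dependence on target size and movement variability.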
When different perceptual signals arising from the same physical entity are integrated, they form a more reliable sensory estimate. When such repetitive sensory signals are pitted against other competing stimuli, such as in a Stroop task, this redundancy may lead to stronger processing that biases behavior toward reporting the redundant stimuli. This bias would therefore be expected to evoke greater incongruency effects than if these stimuli did not contain redundant sensory features. In the present paper, we report that this is not the case for a set of three crossmodal, auditory-visual Stroop tasks. In these tasks participants attended to, and reported, either the visual or the auditory stimulus (in separate blocks) while ignoring the other, unattended modality. The visual component of these stimuli could be purely semantic (words), purely perceptual (colors), or the combination of both. Based on previous work showing enhanced crossmodal integration and visual search gains for redundantly coded stimuli, we had expected that, relative to single features, redundant visual features would both induce greater visual distracter incongruency effects on attended auditory targets and be less influenced by auditory distracters for attended visual targets. Overall, reaction times were faster for visual targets and were dominated by behavioral facilitation for the cross-modal interactions (relative to interference), but showed surprisingly little influence of visual feature redundancy. Post-hoc analyses revealed modest and trending evidence for possible increases in behavioral interference for redundant visual distracters on auditory targets; however, these effects were substantially smaller than anticipated and were not accompanied by a redundancy effect for behavioral facilitation or for attended visual targets.
multisensory conflict; Stroop task; redundancy gains; stimulus onset asynchrony (SOA)
The trade-off between speed and accuracy of sensory discrimination has most often been studied using sensory stimuli that evolve over time, such as random dot motion discrimination tasks. We previously reported that when rats perform motion discrimination, correct trials have longer reaction times than errors, accuracy increases with reaction time, and reaction time increases with stimulus ambiguity. In such experiments, new sensory information is continually presented, which could partly explain interactions between reaction time and accuracy. The present study shows that a changing physical stimulus is not essential to those findings. Freely behaving rats were trained to discriminate between two static visual images in a self-paced, two-alternative forced-choice reaction time task. Each trial was initiated by the rat, and the two images were presented simultaneously and persisted until the rat responded, with no time limit. Reaction times were longer in correct trials than in error trials, and accuracy increased with reaction time, comparable to results previously reported for rats performing motion discrimination. In the motion task, coherence has been used to vary discrimination difficulty. Here morphs between the previously learned images were used to parametrically vary the image similarity. In randomly interleaved trials, rats took more time on average to respond in trials in which they had to discriminate more similar stimuli. For both the motion and image tasks, the dependence of reaction time on ambiguity is weak, as if rats prioritized speed over accuracy. Therefore we asked whether rats can change the priority of speed and accuracy adaptively in response to a change in reward contingencies. For two rats, the penalty delay was increased from 2 to 6 s. When the penalty was longer, reaction times increased, and accuracy improved. This demonstrates that rats can flexibly adjust their behavioral strategy in response to the cost of errors.
decision making; sequential decision; speed–accuracy trade-off; rodent vision; visual behavior; perceptual decision; choice
Animals discriminate stimuli, learn their predictive value and use this knowledge to modify their behavior. In Drosophila, the mushroom body (MB) plays a key role in these processes. Sensory stimuli are sparsely represented by ∼2000 Kenyon cells, which converge onto 34 output neurons (MBONs) of 21 types. We studied the role of MBONs in several associative learning tasks and in sleep regulation, revealing the extent to which information flow is segregated into distinct channels and suggesting possible roles for the multi-layered MBON network. We also show that optogenetic activation of MBONs can, depending on cell type, induce repulsion or attraction in flies. The behavioral effects of MBON perturbation are combinatorial, suggesting that the MBON ensemble collectively represents valence. We propose that local, stimulus-specific dopaminergic modulation selectively alters the balance within the MBON network for those stimuli. Our results suggest that valence encoded by the MBON ensemble biases memory-based action selection.
An animal's survival depends on its ability to respond appropriately to its environment, approaching stimuli that signal rewards and avoiding any that warn of potential threats. In fruit flies, this behavior requires activity in a region of the brain called the mushroom body, which processes sensory information and uses that information to influence responses to stimuli.
Aso et al. recently mapped the mushroom body of the fruit fly in its entirety. This work showed, among other things, that the mushroom body contained 21 different types of output neurons. Building on this work, Aso et al. have started to work out how this circuitry enables flies to learn to associate a stimulus, such as an odor, with an outcome, such as the presence of food.
Two complementary techniques—the use of molecular genetics to block neuronal activity, and the use of light to activate neurons (a technique called optogenetics)—were employed to study the roles performed by the output neurons in the mushroom body. Results revealed that distinct groups of output cells must be activated for flies to avoid—as opposed to approach—odors. Moreover, the same output neurons are used to avoid both odors and colors that have been associated with punishment. Together, these results indicate that the output cells do not encode the identity of stimuli: rather, they signal whether a stimulus should be approached or avoided. The output cells also regulate the amount of sleep taken by the fly, which is consistent with the mushroom body having a broader role in regulating the fly's internal state.
The results of these experiments—combined with new knowledge about the detailed structure of the mushroom body—lay the foundations for new studies that explore associative learning at the level of individual circuits and their component cells. Given that the organization of the mushroom body has much in common with that of the mammalian brain, these studies should provide insights into the fundamental principles that underpin learning and memory in other species, including humans.
mushroom body; memory; behavioral valence; sleep; population code; action selection; D. melanogaster
Daily, our central nervous system receives inputs via several sensory modalities, processes them, and integrates the information to produce a suitable behavior. Remarkably, such multisensory integration binds all of this information into a unified percept. An approach to start investigating this property is to show that perception is better and faster when multimodal stimuli are used as compared to unimodal stimuli. This forms the first part of the present study, conducted in a non-human primate model (n = 2) engaged in a sensory-motor detection task where visual and auditory stimuli were displayed individually or simultaneously. The measured parameters were the reaction time (RT) between stimulus and onset of arm movement, the percentages of successes and errors, and the evolution of these parameters with training. As expected, RTs were shorter when the subjects were exposed to combined stimuli. The gains for both subjects were around 20 and 40 ms, as compared with the auditory and visual stimulus alone, respectively. Moreover, the number of correct responses increased in response to bimodal stimuli. We interpret this multisensory advantage as a redundant-signal effect, which decreases perceptual ambiguity, increases speed of stimulus detection, and improves performance accuracy. The second part of the study presents single-unit recordings derived from the premotor cortex (PM) of the same subjects during the sensory-motor task. Response patterns to sensory/multisensory stimulation are documented and the proportions of specific response types are reported. Characterization of bimodal neurons indicates a mechanism of audio-visual integration, possibly through a decrease of inhibition. Nevertheless, the neural processing through which PM, as a polysensory association cortical area, leads to faster motor responses remains unclear.
sensory-motor; detection task; non-human primate; facilitatory effect; electrophysiology
The diversity of cutaneous sensory afferents has been studied by many investigators using behavioral, physiologic, molecular, and genetic approaches. Largely missing, thus far, is an analysis of the complete morphologies of individual afferent arbors. Here we present a survey of cutaneous sensory arbor morphologies in hairy skin of the mouse using genetically-directed sparse labeling with a sensory neuron-specific alkaline phosphatase reporter. Quantitative analyses of 719 arbors, among which 77 were fully reconstructed, reveal 10 morphologically distinct types. Among the two types with the largest arbors, one contacts ∼200 hair follicles with circumferential endings and a second is characterized by a densely ramifying arbor with one to several thousand branches and a total axon length between one-half and one meter. These observations constrain models of receptive field size and structure among cutaneous sensory neurons, and they raise intriguing questions regarding the cellular and developmental mechanisms responsible for this morphological diversity.
Sensory neurons carry information from sensory cells in the eyes, ears and other sensory organs to the brain and spinal cord so that they can coordinate the body's response to its environment and various stimuli. The sensory organs responsible for four of the traditional senses—vision, hearing, smell and taste—are relatively small and self-contained; however, the sensory organ responsible for touch is as big as the body itself. Moreover, many different types of sensory cells in the skin allow the body to respond to temperature, pain, itches and a range of other external stimuli.
Despite more than a century of research, relatively little is known about the morphology of the complex networks (arbors) of sensory neurons that send signals towards the central nervous system. This is mainly due to difficulties involved in imaging intact skin, the way that different arbors overlap and intermingle, and the relatively large distances that separate the bodies of neuronal cells and the farthest reaches of their arbors.
Wu et al. employed an imaging method that exploits the Cre-Lox system that is already widely used in genetics. In this approach a Cre enzyme is used to remove a region of DNA that is flanked by two genetically engineered Lox sequences. Wu et al. used a gene that codes for an enzyme marker (alkaline phosphatase) that previous investigators had inserted into the DNA of mice. The gene was inserted in such a way that it was only expressed in sensory neurons that innervate the skin when Cre-Lox recombination had removed an adjacent segment of DNA. Moreover, Wu et al. used this reporter gene in combination with a modified Cre enzyme that only enters the nuclei of cells in the presence of a drug (Tamoxifen), so the probability that the marker gene is expressed is determined by the concentration of Tamoxifen. By administering a low level of Tamoxifen to pregnant mice, it was possible to label a very small number of sensory neurons in each embryo. Individual neurons that express the alkaline phosphatase marker were visualized with a histochemical reaction that rendered them dark purple. The remainder of the tissue remained unstained.
Based on quantitative analyses of the morphologies of more than 700 arbors, Wu et al. identified 10 distinct types of neurons. Of the two types of neurons with the largest arbors, one makes contact with ∼200 hair follicles, with the nerve endings completely encircling the follicles; the other type of arbor contains several thousand branches, with a total length for all of the branches summing to as much as one meter in length. The next challenge is to study the morphologies of neurons in tissues other than the skin, and also the neurons involved in other sensory systems, and to explore the cellular and developmental mechanisms responsible for the morphological diversity found in these initial experiments.
skin; neuronal morphology; sparse labeling; receptive field; Brn3a; Mouse
Sensory information about the outside world is encoded by neurons in sequences of discrete, identical pulses termed action potentials or spikes. There is persistent controversy about the extent to which the precise timing of these spikes is relevant to the function of the brain. We revisit this issue, using the motion-sensitive neurons of the fly visual system as a test case. Our experimental methods allow us to deliver more nearly natural visual stimuli, comparable to those which flies encounter in free, acrobatic flight. New mathematical methods allow us to draw more reliable conclusions about the information content of neural responses even when the set of possible responses is very large. We find that significant amounts of visual information are represented by details of the spike train at millisecond and sub-millisecond precision, even though the sensory input has a correlation time of ∼55 ms; different patterns of spike timing represent distinct motion trajectories, and the absolute timing of spikes points to particular features of these trajectories with high precision. Finally, the efficiency of our entropy estimator makes it possible to uncover features of neural coding relevant for natural visual stimuli: first, the system's information transmission rate varies with natural fluctuations in light intensity resulting from varying cloud cover, and marginal increases in information rate occur even when the individual photoreceptors are counting on the order of one million photons per second. Second, the system exploits the relatively slow dynamics of the stimulus to remove coding redundancy and so generate a more efficient neural code.
Neurons communicate by means of stereotyped pulses, called action potentials or spikes, and a central issue in systems neuroscience is to understand this neural coding. Here we study how sensory information is encoded in sequences of spikes, using a combination of novel theoretical and experimental techniques. With motion detection in the blowfly as a model system, we perform experiments in an environment maximally similar to the natural one. We report a number of unexpected, striking observations about the structure of the neural code in this system: First, the timing of spikes is important, with a precision roughly two orders of magnitude greater than the temporal dynamics of the stimulus. Second, the fly exploits the redundancy in the stimulus to optimize the neural code and encode more refined features than would otherwise be possible. This implies that the neural code, even in low-level vision, may be significantly context dependent.
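The logic of such information estimates (total entropy of response "words" minus the noise entropy across repeats of the same stimulus, the so-called direct method) can be sketched with a naive plug-in estimator. The data below are synthetic, and this sketch omits the sampling-bias corrections that real analyses require:

```python
import numpy as np
from collections import Counter

def word_entropy(words):
    """Plug-in entropy estimate, in bits, of a collection of response words."""
    counts = Counter(map(tuple, words))
    p = np.array(list(counts.values()), dtype=float)
    p /= p.sum()
    return float(-np.sum(p * np.log2(p)))

rng = np.random.default_rng(1)
# Binary 'words': spike trains cut into 8 bins at some fine time resolution.
# Total words pool responses across many stimuli (variable firing rates);
# noise words come from repeats of one stimulus (fixed firing rate).
total_words = rng.random((2000, 8)) < rng.uniform(0.05, 0.5, (2000, 1))
noise_words = rng.random((2000, 8)) < 0.2
info_bits = word_entropy(total_words) - word_entropy(noise_words)
```

Shrinking the bin size increases the number of distinct words; this is where spike-timing precision enters the estimate, and where efficient, bias-corrected estimators become essential.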
Visual experience in developing tadpoles spatially organizes neuronal receptive fields and improves network-level representation of visual stimuli.
Sensory experience drives dramatic structural and functional plasticity in developing neurons. However, for single-neuron plasticity to optimally improve whole-network encoding of sensory information, changes must be coordinated between neurons to ensure a full range of stimuli is efficiently represented. Using two-photon calcium imaging to monitor evoked activity in over 100 neurons simultaneously, we investigate network-level changes in the developing Xenopus laevis tectum during visual training with motion stimuli. Training causes stimulus-specific changes in neuronal responses and interactions, resulting in improved population encoding. This plasticity is spatially structured, increasing tuning curve similarity and interactions among nearby neurons, and decreasing interactions among distant neurons. Training does not improve encoding by single clusters of similarly responding neurons, but improves encoding across clusters, indicating coordinated plasticity across the network. NMDA receptor blockade prevents coordinated plasticity, reduces clustering, and abolishes whole-network encoding improvement. We conclude that NMDA receptors support experience-dependent network self-organization, allowing efficient population coding of a diverse range of stimuli.
In the developing brain, sensory experience can extensively re-wire neurons, determining both their shape and function. It is thought that this early period of plasticity improves the brain's representation of sensory input. For this plasticity to actually improve coding efficiency, changes to individual neurons should be coordinated across the brain to produce a network-level functional organization. In this study, we measure such network-level changes during visual learning in developing Xenopus laevis (frog) tadpoles. By imaging neuronal calcium levels, we track activity in over 100 neurons simultaneously to observe changes in both single neurons and whole networks during training. We find that the network improves its representation of visual stimuli over time, by forming spatial clusters of highly connected, similarly responding neurons. Distant neurons, however, become less connected. This organization improves the ability of large groups of neurons, spanning multiple clusters, to discriminate the trained stimuli. Finally, we show that blockade of the NMDA receptor prevents this functional organization and the improvement in the network's stimulus representation. Our study shows how developmental plasticity can influence not only the proper connectivity of the visual system, but also its coding capacity.
Objective: The purpose of this study was to investigate the effects of specific types of tasks on the efficiency of implicit procedural learning in the presence of developmental dyslexia (DD).
Methods: Sixteen children with DD (mean (SD) age 11.6 (1.4) years) and 16 matched normal reader controls (mean age 11.4 (1.9) years) were administered two tests (the Serial Reaction Time test and the Mirror Drawing test) in which implicit knowledge was gradually acquired across multiple trials. Although both tests analyse implicit learning abilities, they tap different competencies. The Serial Reaction Time test requires the development of sequential learning and little (if any) procedural learning, whereas the Mirror Drawing test involves fast and repetitive processing of visuospatial stimuli but no acquisition of sequences.
Results: The children with DD were impaired on both implicit learning tasks, suggesting that the learning deficit observed in dyslexia does not depend on the material to be learned (with or without motor sequence of response action) but on the implicit nature of the learning that characterises the tasks.
Conclusion: Individuals with DD have impaired implicit procedural learning.
Adult learning-induced sensory cortex plasticity results in enhanced action potential rates in neurons that have the most relevant information for the task, or those that respond strongly to one sensory stimulus but weakly to its comparison stimulus. Current theories suggest this plasticity is caused when target stimulus evoked activity is enhanced by reward signals from neuromodulatory nuclei. Prior work has found evidence suggestive of nonselective enhancement of neural responses, and suppression of responses to task distractors, but the differences in these effects between detection and discrimination have not been directly tested. Using cortical implants, we defined physiological responses in macaque somatosensory cortex during serial, matched, detection and discrimination tasks. Nonselective increases in neural responsiveness were observed during detection learning. Suppression of responses to task distractors was observed during discrimination learning, and this suppression was specific to cortical locations that sampled responses to the task distractor before learning. Changes in receptive field size were measured as the area of skin that had a significant response to a constant magnitude stimulus, and these areal changes paralleled changes in responsiveness. From before detection learning until after discrimination learning, the enduring changes were selective suppression of cortical locations responsive to task distractors, and nonselective enhancement of responsiveness at cortical locations selective for target and control skin sites. A comparison of observations in prior studies with the observed plasticity effects suggests that the nonselective response enhancement and selective suppression suffice to explain known plasticity phenomena in simple spatial tasks.
This work suggests that differential responsiveness to task targets and distractors in primary sensory cortex for a simple spatial detection and discrimination task arise from nonselective increases in response over a broad cortical locus that includes the representation of the task target, and selective suppression of responses to the task distractor within this locus.
We interact with the world through the assessment of available, but sometimes imperfect, sensory information. However, little is known about how variance in the quality of sensory information affects the regulation of controlled actions. In a series of three experiments, comprising a total of seven behavioral studies, we examined how different types of spatial frequency information affect underlying processes of response inhibition and selection. Participants underwent a stop-signal task, a two-choice speed/accuracy balance experiment, and a variant of both these tasks where prior information was given about the nature of the stimuli. In all experiments, stimuli were either intact or contained only high or only low spatial frequencies. Overall, drift diffusion model analysis showed a decreased rate of information processing when spatial frequencies were removed, whereas the criterion for information accumulation was lowered. When spatial frequency information was intact, the cost of response inhibition increased (longer SSRT), while a correct response was produced faster (shorter reaction times) and with more certainty (fewer errors). When we manipulated the motivation to respond with a deadline (i.e., be fast or accurate), removal of spatial frequency information slowed response times only when instructions emphasized accuracy. However, the slowing of response times did not improve error rates when compared to fast-instruction trials. These behavioral studies suggest that the removal of spatial frequency information differentially affects the speed of response initiation, the speed of inhibition, and the efficiency of balancing fast or accurate responses. More generally, the present results indicate a task-independent influence of basic sensory information on strategic adjustments in action control.
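The drift diffusion interpretation above (removing spatial frequency content lowers the rate of evidence accumulation) can be illustrated with a generic simulation; the parameters here are invented for illustration and are not the fitted values from the study:

```python
import numpy as np

rng = np.random.default_rng(2)

def ddm_trial(drift, threshold=1.0, noise_sd=1.0, dt=0.001, max_t=5.0):
    """One drift-diffusion trial: noisy evidence drifts toward a bound.

    Returns (correct, reaction_time); crossing +threshold is a correct
    response, crossing -threshold is an error.
    """
    x, t = 0.0, 0.0
    while abs(x) < threshold and t < max_t:
        x += drift * dt + noise_sd * np.sqrt(dt) * rng.normal()
        t += dt
    return x >= threshold, t

def summarize(drift, n=500):
    trials = [ddm_trial(drift) for _ in range(n)]
    accuracy = float(np.mean([c for c, _ in trials]))
    mean_rt = float(np.mean([t for _, t in trials]))
    return accuracy, mean_rt

acc_intact, rt_intact = summarize(drift=1.5)      # intact stimulus: high drift
acc_degraded, rt_degraded = summarize(drift=0.5)  # filtered stimulus: low drift
```

Lowering the drift rate alone yields slower and less accurate responses, the pattern reported for filtered stimuli; lowering the decision criterion (`threshold`) trades some of that slowing back for additional errors.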
Behavioral strategies employed for chemotaxis have been described across phyla, but the sensorimotor basis of this phenomenon has seldom been studied in naturalistic contexts. Here, we examine how signals experienced during free olfactory behaviors are processed by first-order olfactory sensory neurons (OSNs) of the Drosophila larva. We find that OSNs can act as differentiators that transiently normalize stimulus intensity—a property potentially derived from a combination of integral feedback and feed-forward regulation of olfactory transduction. In olfactory virtual reality experiments, we report that high activity levels of the OSN suppress turning, whereas low activity levels facilitate turning. Using a generalized linear model, we explain how peripheral encoding of olfactory stimuli modulates the probability of switching from a run to a turn. Our work clarifies the link between computations carried out at the sensory periphery and action selection underlying navigation in odor gradients.
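The proposed differentiator property (a transient response to a concentration step that adapts back to baseline through integral feedback) can be sketched with a toy linear model; the time constant and gain are illustrative assumptions, not measured values:

```python
import numpy as np

def osn_response(stimulus, dt=0.01, tau_adapt=1.0, gain=1.0):
    """Toy integral-feedback model of an olfactory sensory neuron.

    The adaptation variable integrates the output, so a sustained input
    is cancelled and only changes in the input drive a lasting response.
    """
    response = np.zeros_like(stimulus, dtype=float)
    adaptation = 0.0
    for i, s in enumerate(stimulus):
        r = gain * s - adaptation
        adaptation += dt * r / tau_adapt    # integral feedback on the output
        response[i] = r
    return response

t = np.arange(0.0, 10.0, 0.01)
step = np.where(t >= 2.0, 1.0, 0.0)   # step increase in odor concentration
r = osn_response(step)
```

The response peaks at the step and then decays toward zero even though the stimulus stays on; the sustained output thus reflects the rate of change of the input rather than its level, normalizing stimulus intensity as described above.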
Fruit flies are attracted to the smell of rotting fruit, and use it to guide them to nearby food sources. However, this task is made more challenging by the fact that the distribution of scent or odor molecules in the air is constantly changing. Fruit flies therefore need to cope with, and exploit, this variation if they are to use odors as cues.
Odor molecules bind to receptors on the surface of nerve cells called olfactory sensory neurons, and trigger nerve impulses that travel along these cells. While many studies have investigated how fruit flies can distinguish between different odors, less is known about how animals can use variation in the strength of an odor to guide them towards its source.
Optogenetics is a technique that allows neuroscientists to control the activities of individual nerve cells, simply by shining light on to them. Because fruit fly larvae are almost transparent, optogenetics can be used on freely moving animals. Now, Schulze, Gomez-Marin et al. have used optogenetics in these larvae to trigger patterns of activity in individual olfactory sensory neurons that mimic the activity patterns elicited by real odors. These virtual realities were then used to study, in detail, some of the principles that control the sensory navigation of a larva—as it moves using a series of forward ‘runs’ and direction-changing ‘turns’.
Olfactory sensory neurons responded most strongly whenever light levels changed rapidly in strength (which simulated a rapid change in odor concentration). On the other hand, these neurons showed relatively little response to constant light levels (i.e., constant odors). This indicates that the activity of olfactory sensory neurons typically represents the rate of change in the concentration of an odor. An independent study by Kim et al. found that olfactory sensory neurons in adult fruit flies also respond in a similar way.
Schulze, Gomez-Marin et al. went on to show that the signals processed by a single type of olfactory sensory neuron could be used to predict a larva's behavior. Larvae tended to turn less when their olfactory sensory neurons were highly active. Low levels and inhibition of activity in the olfactory sensory neurons had the opposite effect; this promoted turning. It remains to be determined how this relatively simple control principle is implemented by the neural circuits that connect sensory neurons to the parts of a larva's nervous system that are involved with movement.
chemotaxis; olfaction; optogenetics; electrophysiology; sensorimotor control; computational modeling; D. melanogaster
Although there is now substantial evidence that implicit learning abilities change over the lifespan, disparate results exist regarding the specific developmental trajectory of implicit learning skills. One possible reason for discrepancies across implicit learning studies may be that younger children are more sensitive than adults to variations in implicit learning task procedures and demands. Studies using serial reaction time (SRT) tasks have suggested that in adults, measurements of implicit learning are robust across variations in task procedures. Most classic SRT tasks have used response-contingent pacing, in which the participant's own reaction time determines the duration of each trial. However, recent paradigms with adults and children have used fixed trial pacing, which alters response and attention demands, accuracy feedback, perceived agency, and task motivation. In the current study, we compared learning on fixed-paced and self-paced versions of a spatial sequence learning paradigm in 4-year-old children and adults. Results indicated that preschool-aged children showed reduced evidence of implicit sequence learning in comparison to adults, regardless of the SRT paradigm used. In addition, we found that preschoolers showed significantly greater learning when stimulus presentation was self-paced. These data provide evidence for developmental differences in implicit sequence learning that depend on specific task demands such as stimulus pacing, which may be related to developmental changes in the impact of broader constructs such as attention and task motivation on implicit learning.
implicit sequence learning; serial reaction time paradigm; statistical learning; probabilistic learning; developmental invariance hypothesis
A fundamental task of a sensory system is to infer information about the environment. It has long been suggested that an important goal of the first stage of this process is to encode the raw sensory signal efficiently by reducing its redundancy in the neural representation. Some redundancy, however, would be expected because it can provide robustness to noise inherent in the system. Encoding the raw sensory signal itself is also problematic, because it contains distortion and noise. The optimal solution would be constrained further by limited biological resources. Here, we analyze a simple theoretical model that incorporates these key aspects of sensory coding, and apply it to conditions in the retina. The model specifies the optimal way to incorporate redundancy in a population of noisy neurons, while also optimally compensating for sensory distortion and noise. Importantly, it allows an arbitrary input-to-output cell ratio between sensory units (photoreceptors) and encoding units (retinal ganglion cells), providing predictions of retinal codes at different eccentricities. Compared to earlier models based on redundancy reduction, the proposed model conveys more information about the original signal. Interestingly, redundancy reduction can be near-optimal when the number of encoding units is limited, such as in the peripheral retina. We show that there exist multiple, equally optimal solutions whose receptive field structure and organization vary significantly. Among these, the one which maximizes the spatial locality of the computation, but not the sparsity of either synaptic weights or neural responses, is consistent with known basic properties of retinal receptive fields. The model further predicts that receptive field structure changes less with light adaptation at higher input-to-output cell ratios, such as in the periphery.
Studies of the computational principles of sensory coding have largely focused on the redundancy reduction hypothesis, which posits that a neural population should encode the raw sensory signal efficiently by reducing its redundancy. Models based on this idea, however, have not taken into account some important aspects of sensory systems. First, neurons are noisy, and therefore, some redundancy in the code can be useful for transmitting information reliably. Second, the sensory signal itself is noisy, which should be counteracted as early as possible in the sensory pathway. Finally, neural resources such as the number of neurons are limited, which should strongly affect the form of the sensory code. Here we examine a simple model that takes all these factors into account. We find that the model conveys more information compared to redundancy reduction. When applied to the retina, the model provides a unified functional account for several known properties of retinal coding and makes novel predictions that have yet to be tested experimentally. The generality of the framework allows it to model a wide range of conditions and can be applied to predict optimal sensory coding in other systems.
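The role of redundancy under neural noise can be made concrete with a toy Gaussian-channel calculation (a simplification, not the paper's full model): n identical linear units redundantly encode one noisy sensory signal, and averaging across them suppresses the units' own noise but not the shared sensory noise, so information grows with population size and then saturates:

```python
import math

def info_bits(signal_var, sensory_noise_var, neural_noise_var, n_neurons):
    """Shannon information (bits) about the true signal carried by
    n_neurons identical linear units.  Each unit encodes the noisy
    sensory input (signal + sensory noise) corrupted by independent
    neural noise; averaging the redundant population divides the neural
    noise variance by n but leaves the shared sensory noise intact."""
    eff_noise = sensory_noise_var + neural_noise_var / n_neurons
    return 0.5 * math.log2(1.0 + signal_var / eff_noise)

# Information rises with redundancy and saturates at the limit set by
# the irreducible sensory noise.
bits = [info_bits(signal_var=1.0, sensory_noise_var=0.1,
                  neural_noise_var=1.0, n_neurons=n) for n in (1, 4, 16)]
```

This toy calculation captures only the first two ingredients named above (neural noise and sensory noise); the paper's model additionally optimizes the encoding under a limited number of encoding units.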
Cancellation of redundant information is a highly desirable feature of sensory systems, since it would potentially lead to more efficient detection of novel information. However, biologically plausible mechanisms responsible for such selective cancellation, and especially those robust to realistic variations in the intensity of the redundant signals, are mostly unknown. In this work, we study, via in vivo experimental recordings and computational models, the behavior of a cerebellar-like circuit in the weakly electric fish which is known to perform cancellation of redundant stimuli. We experimentally observe contrast invariance in the cancellation of spatially and temporally redundant stimuli in such a system. Our model, which incorporates heterogeneously delayed feedback, bursting dynamics, and burst-induced STDP, is in agreement with our in vivo observations. In addition, the model gives insight into the activity of granule cells and parallel fibers involved in the feedback pathway, and provides a strong prediction on the time scale of parallel fiber potentiation. Finally, our model predicts an optimal learning contrast of around 15%, a level commonly experienced by interacting fish.
The ability to cancel redundant information is an important feature of many sensory systems. Cancellation mechanisms in neural systems, however, are not well understood, especially when considering realistic conditions such as signals with different intensities. In this work, we study, employing experimental recordings and computational models, a cerebellar-like circuit in the brain of the weakly electric fish which is able to perform such a cancellation. We observe that in vivo recorded neurons in this circuit display a contrast-invariant cancellation of redundant stimuli. We employ a mathematical model to explain this phenomenon, and also to gain insight into several dynamics of the circuit which have not been experimentally measured to date. Interestingly, our model predicts that time-averaged contrast levels of around 15%, which are commonly experienced by interacting fish, would shape the circuit to behave as observed experimentally.
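The cancellation mechanism can be caricatured in a few lines: delayed parallel-fiber copies of a predictable stimulus feed back through plastic weights, and an anti-Hebbian update builds a negative image that subtracts the redundant input. For simplicity, this sketch replaces the burst-induced STDP rule of the model with a plain anti-Hebbian (LMS-like) update, and all parameters are illustrative:

```python
import math

def learn_negative_image(stimulus, n_delays=20, epochs=200, lr=0.01):
    """Cancellation of a redundant (predictable) stimulus by a
    cerebellar-like feedback pathway.  Delayed parallel-fiber copies of
    one stimulus cycle are combined through plastic weights; an
    anti-Hebbian update (a simplified stand-in for burst-induced STDP)
    depresses fibers that are active when the output cell is driven,
    gradually building a negative image of the stimulus."""
    T = len(stimulus)
    step = T // n_delays
    # Parallel fibers: nonnegative, delayed copies of a reference cycle.
    pf = [[1.0 + math.cos(2 * math.pi * ((t - d * step) % T) / T)
           for d in range(n_delays)] for t in range(T)]
    weights = [0.0] * n_delays
    for _ in range(epochs):
        for t in range(T):
            out = stimulus[t] + sum(w * x for w, x in zip(weights, pf[t]))
            for d in range(n_delays):
                weights[d] -= lr * out * pf[t][d]  # anti-Hebbian depression
    return [stimulus[t] + sum(w * x for w, x in zip(weights, pf[t]))
            for t in range(T)]

T = 40
stim = [math.cos(2 * math.pi * t / T) for t in range(T)]
residual = learn_negative_image(stim)
```

After training, the residual output is close to zero: the predictable component has been cancelled, while a novel stimulus superimposed on it would pass through unattenuated.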
In nature, animals form memories associating reward or punishment with stimuli from different sensory modalities, such as smells and colors. It is unclear, however, how distinct sensory memories are processed in the brain. We established appetitive and aversive visual learning assays for Drosophila that are comparable to the widely used olfactory learning assays. These assays share critical features, such as reinforcing stimuli (sugar reward and electric shock punishment), and allow direct comparison of the cellular requirements for visual and olfactory memories. We found that the same subsets of dopamine neurons drive formation of both sensory memories. Furthermore, distinct yet partially overlapping subsets of mushroom body intrinsic neurons are required for visual and olfactory memories. Thus, our results suggest that distinct sensory memories are processed in a common brain center. Such centralization of related brain functions is an economical design that avoids the repetition of similar circuit motifs.
Animals tend to associate good and bad things with certain visual scenes, smells and other kinds of sensory information. If we get food poisoning after eating a new food, for example, we tend to associate the taste and smell of the new food with feelings of illness. This is an example of a negative ‘associative memory’, and it can persist for months, even when we know that our sickness was not caused by the new food itself but by some foreign body that should not have been in the food. The same is true for positive associative memories.
It is known that many associative memories contain information from more than one of the senses. Our memory of a favorite food, for instance, includes its scent, color and texture, as well as its taste. However, little is known about the ways in which information from the different senses is processed in the brain. Does each sense have its own dedicated memory circuit, or do multiple senses converge to the same memory circuit?
A number of studies have used olfactory (smell) and visual stimuli to study the basic neuroscience that underpins associative memories in fruit flies. The olfactory experiments traditionally use sugar and electric shocks to induce positive and negative associations with various scents. However, the visual experiments use other methods to induce associations with colors. This means that it is difficult to combine and compare the results of olfactory and visual experiments.
Now, Vogt, Schnaitmann et al. have developed a transparent grid that can be used to administer electric shocks in visual experiments. This allows direct comparisons to be made between the neuronal processing of visual associative memories and the neural processing of olfactory associative memories.
Vogt, Schnaitmann et al. showed that positive associative memories of both visual and olfactory stimuli are driven by the same subset of dopamine neurons. Similarly, another subset of dopamine neurons was found to drive negative memories of both the visual and olfactory stimuli. This work shows that associative memories are processed by a centralized circuit that receives both visual and olfactory inputs, thus reducing the number of memory circuits needed for such memories.
associative memory; dopamine neurons; visual learning; D. melanogaster
Implicit skill learning is hypothesized to depend on nondeclarative memory that operates independently of the medial temporal lobe (MTL) memory system and instead depends on cortico-striatal circuits between the basal ganglia and cortical areas supporting motor function and planning. Research with the Serial Reaction Time (SRT) task suggests that patients with memory disorders due to MTL damage exhibit normal implicit sequence learning. However, reports of intact learning rely on observations of no group differences, leading to speculation about whether implicit sequence learning is fully intact in these patients. Patients with Parkinson's disease (PD) often exhibit impaired sequence learning, but this impairment is not universally observed.
Implicit perceptual-motor sequence learning was examined using the Serial Interception Sequence Learning (SISL) task in patients with amnestic Mild Cognitive Impairment (MCI; n=11) and patients with PD (n=15). Sequence learning in SISL is resistant to explicit learning, and individually adapted task difficulty controls for baseline differences in performance.
Patients with MCI exhibited robust sequence learning, equivalent to healthy older adults (n=20), supporting the hypothesis that the MTL does not contribute to learning in this task. In contrast, the majority of patients with PD exhibited no sequence-specific learning in spite of matched overall task performance. Two patients with PD exhibited performance indicative of an explicit compensatory strategy, suggesting that impaired implicit learning may lead to greater reliance on explicit memory in some individuals.
The differences in learning between patient groups provide strong evidence that implicit sequence learning depends on intact basal ganglia function, with no contribution from the MTL memory system.
implicit memory; Parkinson's disease; mild cognitive impairment; sequence learning; skill learning
Humans and animals are able to learn complex behaviors based on a massive stream of sensory information from different modalities. Early animal studies have identified learning mechanisms that are based on reward and punishment such that animals tend to avoid actions that lead to punishment whereas rewarded actions are reinforced. However, most algorithms for reward-based learning are only applicable if the dimensionality of the state-space is sufficiently small or its structure is sufficiently simple. Therefore, the question arises of how the problem of learning on high-dimensional data is solved in the brain. In this article, we propose a biologically plausible generic two-stage learning system that can be applied directly to raw high-dimensional input streams. The system is composed of a hierarchical slow feature analysis (SFA) network for preprocessing and a simple neural network on top that is trained based on rewards. We demonstrate by computer simulations that this generic architecture is able to learn quite demanding reinforcement learning tasks on high-dimensional visual input streams in a time comparable to that needed when an explicit, highly informative low-dimensional state-space representation is given instead of the high-dimensional visual input. The learning speed of the proposed architecture in a task similar to the Morris water maze task is comparable to that found in experimental studies with rats. This study thus supports the hypothesis that slowness learning is one important unsupervised learning principle utilized in the brain to form efficient state representations for behavioral learning.
Humans and animals are able to learn complex behaviors based on a massive stream of sensory information from different modalities. Early animal studies have identified learning mechanisms that are based on reward and punishment such that animals tend to avoid actions that lead to punishment whereas rewarded actions are reinforced. It is an open question how sensory information is processed by the brain in order to learn and perform rewarding behaviors. In this article, we propose a learning system that combines the autonomous extraction of important information from the sensory input with reward-based learning. The extraction of salient information is learned by exploiting the temporal continuity of real-world stimuli. A subsequent neural circuit then learns rewarding behaviors based on this representation of the sensory input. We demonstrate in two control tasks that this system is capable of learning complex behaviors on raw visual input.
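The slowness principle behind the preprocessing stage can be shown with a minimal linear slow feature analysis (SFA). The study used a hierarchical nonlinear SFA network on visual input; this toy sketch only extracts the slowest linear feature from a two-channel mixture of a slow latent source and a fast distractor:

```python
import numpy as np

def slow_feature(signals):
    """Linear SFA: find the unit-variance linear combination of the
    input channels whose value changes most slowly over time, by
    whitening the signal and minimizing the variance of its temporal
    difference (a generalized eigenvalue problem)."""
    x = signals - signals.mean(axis=0)            # (time, channels)
    dx = np.diff(x, axis=0)
    cov = x.T @ x / len(x)
    dcov = dx.T @ dx / len(dx)
    evals, evecs = np.linalg.eigh(cov)
    white = evecs / np.sqrt(evals)                # whitening matrix
    d2, v2 = np.linalg.eigh(white.T @ dcov @ white)
    w = white @ v2[:, 0]                          # slowest direction
    return x @ w

# Toy input: a slow latent variable mixed with a fast distractor.
t = np.linspace(0, 20 * np.pi, 2000)
slow, fast = np.sin(0.05 * t), np.sin(7.0 * t)
mixed = np.column_stack([slow + 0.5 * fast, slow - 0.5 * fast])
y = slow_feature(mixed)
# The extracted feature should correlate strongly with the slow source.
r = abs(np.corrcoef(y, slow - slow.mean())[0, 1])
```

The extracted feature tracks the slow latent source and discards the fast distractor, which is the kind of compact state representation the reward-based stage is then trained on.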
Temporal experience of odor gradients is important in spatial orientation of animals. The fruit fly Drosophila melanogaster exhibits robust odor-guided behaviors in an odor gradient field. In order to investigate how early olfactory circuits process temporal variation of olfactory stimuli, we subjected flies to precisely defined odor concentration waveforms and examined spike patterns of olfactory sensory neurons (OSNs) and projection neurons (PNs). We found a significant temporal transformation between OSN and PN spike patterns, manifested by the PN output strongly signaling the OSN spike rate and its rate of change. A simple two-dimensional model admitting the OSN spike rate and its rate of change as inputs closely predicted the PN output. When cascaded with the rate-of-change encoding by OSNs, PNs primarily signal the acceleration and the rate of change of dynamic odor stimuli to higher brain centers, thereby enabling animals to reliably respond to the onsets of odor concentrations.
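The OSN-to-PN transformation can be sketched as a two-dimensional linear-nonlinear model whose inputs are the OSN spike rate and its rate of change, passed through a rectifying nonlinearity. The weights and threshold below are illustrative placeholders, not fitted values from the study:

```python
def pn_response(osn_rates, dt, w_rate=0.4, w_deriv=3.0, threshold=5.0):
    """Two-dimensional linear-nonlinear sketch of the OSN-to-PN
    transformation: the projection neuron output is a rectified linear
    combination of the OSN spike rate and its rate of change, so model
    PNs fire most strongly when the OSN rate is rising."""
    out = []
    prev = osn_rates[0]
    for r in osn_rates:
        deriv = (r - prev) / dt
        out.append(max(w_rate * r + w_deriv * deriv - threshold, 0.0))
        prev = r
    return out

# OSN rate rising steeply at odor onset, then holding at a plateau.
dt = 0.1
osn = [0.0] * 10 + [min(10.0 * (i + 1), 50.0) for i in range(20)] + [50.0] * 10
pn = pn_response(osn, dt)
```

Because the derivative term dominates during the rising phase, the model PN responds most strongly at odor onset and only weakly during the sustained plateau, matching the onset signaling described above.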
Fruit flies are attracted to the smell of rotting fruit, and use it to guide them to nearby food sources. However, this task is made more challenging by the fact that the distribution of scent or odor molecules in the air is constantly changing. Fruit flies therefore need to cope with, and exploit, this variation if they are to use odors as cues.
Odor molecules bind to receptors on the surface of nerve cells called olfactory sensory neurons, and trigger nerve impulses that travel along these cells. The olfactory sensory neurons are connected to other cells called projection neurons that in turn relay information to the higher centers of the brain. While many studies have investigated how fruit flies can distinguish between different odors, less is known about how animals can use variation in the strength of an odor to guide them towards its source.
Kim et al. have now addressed this question by devising a method for delivering precise quantities of odors in controlled patterns to fruit flies, and then measuring the responses of olfactory sensory neurons and projection neurons. These experiments revealed that olfactory sensory neurons—which are found mainly in the flies' antennae—responded most strongly whenever an odor changed rapidly in strength, and showed relatively little response to constant odors. An independent study by Schulze, Gomez-Marin et al. found that olfactory sensory neurons in fruit fly larvae also respond in a similar way.
Kim et al. also found that the response of the projection neurons depended on both the rate of nerve impulses in the olfactory sensory neurons and on how quickly this rate was changing. But, unlike the olfactory sensory neurons, projection neurons showed their strongest responses immediately after an odor first appeared.
Thus, in contrast to organisms such as bacteria and worms, which are highly sensitive to the local concentration gradients of odors, fruit flies instead appear to be more responsive to the sudden appearance of an odor in their environment. Kim et al. suggest that this difference may reflect the fact that for ground-based organisms, local gradients are generally reliable predictors of the location of an odor source. However, for flying insects, continually changing air currents mean that predictable local gradients are less common. Therefore, the ability to detect a hint of an odor before the wind changes is a more useful skill.
olfactory sensory neurons; projection neurons; temporal processing; acceleration encoding; two-dimensional linear-nonlinear model; antennal lobes; D. melanogaster
Animals face highly complex and dynamic olfactory stimuli in their natural environments, which require fast and reliable olfactory processing. Parallel processing is a common principle of sensory systems supporting this task, for example in visual and auditory systems, but its role in olfaction has remained unclear. Studies in the honeybee have focused on a dual olfactory pathway. Two sets of projection neurons connect glomeruli in two antennal-lobe hemilobes via lateral and medial tracts, in opposite sequence, with the mushroom bodies and lateral horn. Comparative studies suggest that this dual-tract circuit represents a unique adaptation in Hymenoptera. Imaging studies indicate that glomeruli in both hemilobes receive redundant sensory input. Recent simultaneous multi-unit recordings from projection neurons of both tracts revealed widely overlapping response profiles, strongly indicating parallel olfactory processing. Whereas lateral-tract neurons respond fast with broad (generalistic) profiles, medial-tract neurons are odorant specific and respond more slowly. In analogy to the “what” and “where” subsystems of visual pathways, this suggests two parallel olfactory subsystems providing “what” (quality) and “when” (temporal) information. Temporal response properties may support across-tract coincidence coding in higher centers. Parallel olfactory processing likely enhances perception of complex odorant mixtures to decode the diverse and dynamic olfactory world of a social insect.
Electronic supplementary material
The online version of this article (doi:10.1007/s00359-013-0821-y) contains supplementary material, which is available to authorized users.
Antennal lobe; Glomeruli; Projection neurons; Mushroom bodies; Multi-unit recording
Perceptual learning, even when it exhibits significant specificity to basic stimulus features such as retinal location or spatial frequency, may improve discrimination performance through enhancement of early sensory representations, through selective re-weighting of connections from the sensory representations to specific responses, or both. For most experiments in the literature (Ahissar & Hochstein, 1996; Fahle & Morgan, 1996; Wilson, 1986), the two forms of plasticity make similar predictions (Dosher & Lu, 2009; Petrov, Dosher & Lu, 2005). The strongest test of the two hypotheses must use training and transfer tasks that rely on the same sensory representation but different task-dependent decision structures. If training changes sensory representations, transfer (or interference) must occur, since the (changed) sensory representations are common. If instead training re-weights a separate set of task connections to decision, then performance in the two tasks may still be independent. Here, we performed a co-learning analysis of two perceptual learning tasks based on identical input stimuli, following the study of Fahle and Morgan (1996), who used nearly identical input stimuli (a three-dot pattern) to train bisection and vernier tasks. Two important modifications were made: (1) identical input stimuli were used in the two tasks, and (2) subjects practiced both tasks in multiple alternating blocks (800 trials/block). Two groups of subjects with counter-balanced order of training participated in the experiments. We found significant and independent learning of the two tasks. This pattern of results is consistent with the reweighting hypothesis of perceptual learning.
Perceptual learning; Vernier; Bisection; Representation Enhancement; Selective Reweighting
Neurons communicate primarily with spikes, but most theories of neural computation are based on firing rates. Yet, many experimental observations suggest that the temporal coordination of spikes plays a role in sensory processing. Among potential spike-based codes, synchrony appears as a good candidate because neural firing and plasticity are sensitive to fine input correlations. However, it is unclear what role synchrony may play in neural computation, and what functional advantage it may provide. With a theoretical approach, I show that the computational interest of neural synchrony appears when neurons have heterogeneous properties. In this context, the relationship between stimuli and neural synchrony is captured by the concept of synchrony receptive field, the set of stimuli which induce synchronous responses in a group of neurons. In a heterogeneous neural population, it appears that synchrony patterns represent structure or sensory invariants in stimuli, which can then be detected by postsynaptic neurons. The required neural circuitry can spontaneously emerge with spike-timing-dependent plasticity. Using examples in different sensory modalities, I show that this allows simple neural circuits to extract relevant information from realistic sensory stimuli, for example to identify a fluctuating odor in the presence of distractors. This theory of synchrony-based computation shows that relative spike timing may indeed have computational relevance, and suggests new types of neural network models for sensory processing with appealing computational properties.
How does the brain compute? Traditional theories of neural computation describe the operating function of neurons in terms of average firing rates, with the timing of spikes bearing little information. However, numerous studies have shown that spike timing can convey information and that neurons are highly sensitive to synchrony in their inputs. Here I propose a simple spike-based computational framework, based on the idea that stimulus-induced synchrony can be used to extract sensory invariants (for example, the location of a sound source), which is a difficult task for classical neural networks. It relies on the simple remark that a series of repeated coincidences is in itself an invariant. Many aspects of perception rely on extracting invariant features, such as the spatial location of a time-varying sound, the identity of an odor with fluctuating intensity, the pitch of a musical note. I demonstrate that simple synchrony-based neuron models can extract these useful features, by using spiking models in several sensory modalities.
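The sound-localization example can be sketched concretely: two ears receive the same fluctuating sound with an interaural time difference (ITD), and among a bank of neurons with heterogeneous internal delays, only the pair whose delay difference matches the ITD fires synchronously; that pair's synchrony receptive field contains the source location. All signals, delays, and thresholds below are illustrative:

```python
import math

def crossing_times(signal, dt, threshold=0.0):
    """Spike times: upward threshold crossings of the input signal."""
    return [i * dt for i in range(1, len(signal))
            if signal[i - 1] < threshold <= signal[i]]

def synchrony(times_a, times_b, window):
    """Fraction of spikes in train A with a partner in train B within
    the coincidence window."""
    if not times_a:
        return 0.0
    return sum(any(abs(ta - tb) < window for tb in times_b)
               for ta in times_a) / len(times_a)

# A fluctuating sound reaches the right ear with an interaural delay.
dt, itd = 0.0001, 0.0003
t = [i * dt for i in range(5000)]
tone = lambda x: math.sin(2 * math.pi * 200 * x) + 0.5 * math.sin(2 * math.pi * 530 * x)
left = [tone(x) for x in t]
right = [tone(x - itd) for x in t]

# Heterogeneous internal delays define the synchrony receptive fields:
# the left neuron whose delay matches the ITD fires in sync with the
# (undelayed) right neuron.
delays = [0.0, 0.0001, 0.0002, 0.0003, 0.0004]
left_spikes = {d: [s + d for s in crossing_times(left, dt)] for d in delays}
right_spikes = crossing_times(right, dt)
sync = {d: synchrony(left_spikes[d], right_spikes, window=0.5 * dt)
        for d in delays}
best = max(sync, key=sync.get)   # delay whose pair best matches the ITD
```

The internal delay that maximizes synchrony recovers the imposed ITD, illustrating how a series of repeated coincidences acts as an invariant that downstream coincidence detectors can read out.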
Sensory inputs are remarkably organized along all sensory pathways. While sensory representations are known to undergo plasticity at the higher levels of sensory pathways following peripheral lesions or sensory experience, less is known about the functional plasticity of peripheral inputs induced by learning. We addressed this question in the adult mouse olfactory system by combining odor discrimination studies with functional imaging of sensory input activity in awake mice. Here we show that associative learning, but not passive odor exposure, potentiates the strength of sensory inputs up to several weeks after the end of training. We conclude that experience-dependent plasticity can occur in the periphery of the adult mouse olfactory system, which should improve odor detection and contribute to accurate and fast odor discrimination.
The mammalian brain is not static, but instead retains a significant degree of plasticity throughout an animal’s life. It is this plasticity that enables adults to learn new things, adjust to new environments and, to some degree, regain functions they have lost as a result of brain damage.
However, information about the environment must first be detected and encoded by the senses. Odors, for example, activate specific receptors in the nose, and these in turn project to structures called glomeruli in a region of the brain known as the olfactory bulb. Each odor activates a unique combination of glomeruli, and the information contained within this ‘odor fingerprint’ is relayed via olfactory bulb neurons to the olfactory cortex.
Now, Abraham et al. have revealed that the earliest stages of odor processing also show plasticity in adult animals. Two groups of mice were exposed to the same two odors: however, the first group was trained to discriminate between the odors to obtain a reward, whereas the second group was passively exposed to them. When both groups of mice were subsequently re-exposed to the odors, the trained group activated more glomeruli, more strongly, than a control group that had never encountered the odors before. By contrast, the responses of mice in the passively exposed group did not differ from those of a control group.
Given that the response of glomeruli correlates with the ability of mice to discriminate between odors, these results suggest that trained animals would now be able to discriminate between the odors more easily than other mice. In other words, sensory plasticity ensures that stimuli that have been associatively learned with or without reward will be easier to detect should they be encountered again in the future.
sensory perception; imaging; behavior; mouse