One key problem in computational neuroscience and neural engineering is the identification and modeling of functional connectivity in the brain using spike train data. To reduce model complexity, alleviate overfitting, and thus facilitate model interpretation, sparse representation and estimation of functional connectivity are needed. Sparsities include global sparsity, which captures the sparse connectivity between neurons, and local sparsity, which reflects the active temporal ranges of the input-output dynamical interactions. In this paper, we formulate a generalized functional additive model (GFAM) and develop the associated penalized likelihood estimation methods for such a modeling problem. A GFAM consists of a set of basis functions convolving the input signals, and a link function generating the firing probability of the output neuron from the summation of the convolutions weighted by the sought model coefficients. Model sparsities are achieved by using various penalized likelihood estimations and basis functions. Specifically, we introduce two variations of the GFAM: one using a global basis (e.g., Laguerre basis) with group LASSO estimation, and one using a local basis (e.g., B-spline basis) with group bridge estimation. We further develop an optimization method based on quadratic approximation of the likelihood function for the estimation of these models. Simulation and experimental results show that both the group-LASSO-Laguerre and group-bridge-B-spline models faithfully capture the global sparsity, while the latter accurately replicates both the global and local sparsities simultaneously. The sparse models outperform the full models estimated with the standard maximum likelihood method in out-of-sample predictions.
functional connectivity; generalized linear model; sparsity; penalized likelihood; basis function; spike trains; temporal coding
Coordinating the movements of different body parts is a challenging process for the central nervous system because of several problems. Four of these main difficulties are: first, moving one part can move others; second, the parts can have different dynamics; third, some parts can have different motor goals; and fourth, some parts may be perturbed by outside forces. Here, we propose a novel approach for the control of linked systems with feedback loops for each part. The proximal parts have separate goals, but critically the most distal part has only the common goal. We apply this new control policy to eye-head coordination in two dimensions, specifically head-unrestrained gaze saccades. Paradoxically, the hierarchical structure has controllers for the gaze and the head, but not for the eye (the most distal part). Our simulations demonstrate that the proposed control structure reproduces much of the published empirical data about gaze movements; e.g., it compensates for perturbations, accurately reaches goals for gaze and head from arbitrary initial positions, simulates the nine relationships of the head-unrestrained main sequence, and reproduces observations from lesion and single-unit recording experiments. We conclude by showing how our model can be easily extended to control structures with more linked segments, such as the control of coordinated eye-on-head-on-trunk movements.
Gaze saccades; Eye; Head; Feedback control; Superior colliculus; VOR suppression
Despite the central position of CA3 pyramidal cells in the hippocampal circuit, the experimental investigation of their synaptic properties has been limited. Recent slice experiments from adult rats characterized AMPA and NMDA receptor unitary synaptic responses in CA3b pyramidal cells. Here, excitatory synaptic activation is modeled to infer biophysical parameters, aid analysis interpretation, explore mechanisms, and formulate predictions by contrasting simulated somatic recordings with experimental data. Reconstructed CA3b pyramidal cells from the public repository NeuroMorpho.Org were used to allow for cell-specific morphological variation. For each cell, synaptic responses were simulated for perforant pathway and associational/commissural synapses. Means and variability for peak amplitude, time-to-peak, and half-height width in these responses were compared with equivalent statistics from experimental recordings. Synaptic responses mediated by AMPA receptors are best fit with properties typical of previously characterized glutamatergic receptors where perforant path synapses have conductances twice that of associational/commissural synapses (0.9 vs. 0.5 nS) and more rapid peak times (1.0 vs. 3.3 ms). Reanalysis of passive-cell experimental traces using the model shows no evidence of a CA1-like increase of associational/commissural AMPA receptor conductance with increasing distance from the soma. Synaptic responses mediated by NMDA receptors are best fit with rapid kinetics, suggestive of NR2A subunits as expected in mature animals. Predictions were made for passive-cell current clamp recordings, combined AMPA and NMDA receptor responses, and local dendritic depolarization in response to unitary stimulations. Models of synaptic responses in active cells suggest altered axial resistivity and the presence of synaptically activated potassium channels in spines.
AMPA receptor; NMDA receptor; Hippocampus
We derive a formula that relates the spike-triggered covariance (STC) to the phase resetting curve (PRC) of a neural oscillator. We use this to show how changes in the shape of the PRC alter the sensitivity of the neuron to different stimulus features, which are the eigenvectors of the STC. We compute the PRC and STC for some biophysical models. We compare the STCs and their spectral properties for a two-parameter family of PRCs. Surprisingly, the skew of the PRC has a larger effect on the spectrum and shape of the STC than does the bimodality of the PRC (which plays a large role in synchronization properties). Finally, we relate the STC directly to the spike-triggered average and apply this theory to an olfactory bulb mitral cell recording.
Spike-triggered covariance; neural oscillator; phase resetting curve; perturbation; adaptation
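As a minimal sketch of the quantities involved, the spike-triggered average and covariance can be estimated directly from a stimulus and spike times; the stimulus features discussed above are then the leading eigenvectors of the STC. The function below is illustrative, not the authors' code, and assumes a one-dimensional discretely sampled stimulus.

```python
import numpy as np

def spike_triggered_stats(stimulus, spike_idx, window):
    """Spike-triggered average (STA) and covariance (STC): collect the
    `window` stimulus samples preceding each spike, average them for the
    STA, and form the covariance of the mean-subtracted segments."""
    segs = np.array([stimulus[i - window:i] for i in spike_idx if i >= window])
    sta = segs.mean(axis=0)
    centered = segs - sta
    stc = centered.T @ centered / (len(segs) - 1)
    return sta, stc
```

The stimulus features a neuron is sensitive to are then recovered as, e.g., `np.linalg.eigh(stc)` eigenvectors whose eigenvalues differ most from the raw stimulus variance.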
In this paper, we develop a dynamical point process model for how complex sounds are represented by neural spiking in auditory nerve fibers. Although many models have been proposed, our point process model is the first to capture elements of spontaneous rate, refractory effects, frequency selectivity, phase locking at low frequencies, and short-term adaptation, all within a compact parametric approach. Using a generalized linear model for the point process conditional intensity, driven by extrinsic covariates, previous spiking, and an input-dependent charging/discharging capacitor model, our approach robustly captures the aforementioned features on datasets taken at the auditory nerve of chinchilla in response to speech inputs. We confirm the goodness of fit of our approach using the Time-Rescaling Theorem for point processes.
Cochlea; Auditory nerve; Spiking model; Statistical model; Point process; Conditional intensity; Time rescaling theorem
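The goodness-of-fit check mentioned above rests on the Time-Rescaling Theorem: if the conditional intensity model is correct, integrating it between successive spikes yields unit-rate exponential intervals, which transform to uniform variates that can be checked with a Kolmogorov-Smirnov statistic. The sketch below is a generic illustration of that procedure (the discretization scheme and names are assumptions, not the authors' code).

```python
import numpy as np

def rescaled_times(spike_times, intensity, t_grid):
    """Time-rescaling: integrate the conditional intensity up to each spike
    time (via a cumulative sum on t_grid), difference to get rescaled
    intervals, and map them to [0, 1]; a correct model gives uniforms."""
    dt = t_grid[1] - t_grid[0]
    Lambda = np.interp(spike_times, t_grid, np.cumsum(intensity) * dt)
    taus = np.diff(Lambda)          # unit-rate exponential if model correct
    return 1.0 - np.exp(-taus)      # uniform on [0, 1] if model correct

def ks_statistic(u):
    """Kolmogorov-Smirnov distance of samples u from Uniform(0, 1)."""
    u = np.sort(u)
    k = np.arange(1, len(u) + 1)
    return np.max(np.maximum(k / len(u) - u, u - (k - 1) / len(u)))
```

A KS statistic within the 95% confidence band (roughly 1.36/sqrt(n) for n intervals) indicates the fitted intensity adequately explains the spike train.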
Neural tissue injuries render voltage-gated Na+ channels (Nav) leaky, thereby altering excitability, disrupting propagation and causing neuropathic-pain-related ectopic activity. In both recombinant systems and native excitable membranes, membrane damage causes the kinetically-coupled activation and inactivation processes of Nav channels to undergo hyperpolarizing shifts. This damage-intensity-dependent change, called coupled left-shift (CLS), yields a persistent or “subthreshold” Nav window conductance. Nodes of Ranvier simulations involving various degrees of mild CLS showed that, as the system’s channel/pump fluxes attempt to re-establish ion homeostasis, the CLS elicits hyperexcitability, subthreshold oscillations and neuropathic-type action potential (AP) bursts. CLS-induced intermittent propagation failure was studied in simulations of stimulated axons, but pump contributions were ignored, leaving open an important question: does mild injury (small CLS values, pumps functioning well) render propagation-competent but still quiescent axons vulnerable to further impairments as the system attempts to cope with its normal excitatory inputs? We probe this incipient diffuse axonal injury scenario using a 10-node myelinated axon model. Fully restabilized nodes with mild damage can, we show, become ectopic signal generators (“ectopic nodes”) because incoming APs stress Na+/K+ gradients, thereby altering spike thresholds. Comparable changes could contribute to acquired sodium channelopathies as diverse as epileptic phenomena and the neuropathic amplification of normally benign sensory inputs. Input spike patterns, we found, propagate with good fidelity through an ectopically firing site only when their frequencies exceed the ectopic frequency. This “propagation window” is a robust phenomenon, occurring despite Gaussian noise, large jitter and the presence of several consecutive ectopic nodes.
Electronic supplementary material
The online version of this article (doi:10.1007/s10827-014-0521-9) contains supplementary material, which is available to authorized users.
Ectopicity onset; Phase locking; Neuropathic pain; Coupled left-shift (CLS); Nav1.6 acquired channelopathies
Spike threshold filters incoming inputs and thus gates activity flow through neuronal networks. Threshold is variable, and in many types of neurons there is a relationship between the threshold voltage and the rate of rise of the membrane potential (dVm/dt) leading to the spike. In primary sensory cortex this relationship enhances the sensitivity of neurons to a particular stimulus feature. While Na+ channel inactivation may contribute to this relationship, recent evidence indicates that K+ currents located in the spike initiation zone are crucial. Here we used a simple Hodgkin-Huxley biophysical model to systematically investigate the role of K+ and Na+ current parameters (activation voltages and kinetics) in regulating spike threshold as a function of dVm/dt. Threshold was determined empirically and not estimated from the shape of the Vm prior to a spike. This allowed us to investigate intrinsic currents and values of gating variables at the precise voltage threshold. We found that Na+ inactivation is sufficient to produce the relationship provided it occurs at hyperpolarized voltages combined with slow kinetics. Alternatively, hyperpolarization of the K+ current activation voltage, even in the absence of Na+ inactivation, is also sufficient to produce the relationship. This hyperpolarized shift of K+ activation allows an outward current prior to spike initiation to antagonize the Na+ inward current such that it becomes self-sustaining at a more depolarized voltage. Our simulations demonstrate parameter constraints on Na+ inactivation and the biophysical mechanism by which an outward current regulates spike threshold as a function of dVm/dt.
Spike threshold; Hodgkin-Huxley model; Potassium current; Sodium channel inactivation; dVm/dt
One of the reasons the visual cortex has attracted the interest of computational neuroscience is that it has well-defined inputs. The lateral geniculate nucleus (LGN) of the thalamus is the source of visual signals to the primary visual cortex (V1). Most large-scale cortical network models approximate the spike trains of LGN neurons as simple Poisson point processes. However, many studies have shown that neurons in the early visual pathway are capable of spiking with high temporal precision and their discharges are not Poisson-like. To gain an understanding of how response variability in the LGN influences the behavior of V1, we study response properties of model V1 neurons that receive purely feedforward inputs from LGN cells modeled either as noisy leaky integrate-and-fire (NLIF) neurons or as inhomogeneous Poisson processes. We first demonstrate that the NLIF model is capable of reproducing many experimentally observed statistical properties of LGN neurons. Then we show that a V1 model in which the LGN input to a V1 neuron is modeled as a group of NLIF neurons produces higher orientation selectivity than the one with Poisson LGN input. The second result implies that statistical characteristics of LGN spike trains are important for V1's function. We conclude that physiologically motivated models of V1 need to include more realistic LGN spike trains that are less noisy than inhomogeneous Poisson processes.
Integrate-and-fire; Lateral geniculate nucleus; Neural noise; Orientation selectivity; Poisson processes; Primary visual cortex
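A noisy leaky integrate-and-fire (NLIF) neuron of the kind used above for the LGN inputs can be sketched with a simple Euler-Maruyama integration. This is an illustrative toy, not the paper's model; all parameter values are assumptions.

```python
import numpy as np

def simulate_nlif(I, dt=0.1, tau=10.0, v_th=1.0, v_reset=0.0,
                  sigma=0.2, seed=0):
    """Noisy leaky integrate-and-fire: the membrane potential integrates the
    input current I (one sample per time step) with a leak and additive
    Gaussian noise; a spike is recorded and the potential reset whenever
    the threshold v_th is crossed."""
    rng = np.random.default_rng(seed)
    v = 0.0
    spike_times = []
    for k, i_t in enumerate(I):
        v += (-v + i_t) * dt / tau + sigma * np.sqrt(dt) * rng.standard_normal()
        if v >= v_th:
            spike_times.append(k * dt)
            v = v_reset
    return spike_times
```

Driving such units with a time-varying stimulus-dependent current yields spike trains whose variability is sub-Poisson, which is the property the abstract argues matters for V1 orientation selectivity.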
Anaesthetic agents are known to affect extra-synaptic GABAergic receptors, which induce tonic inhibitory currents. Since these receptors are very sensitive to small concentrations of agents, they are thought to play an important role in the neural mechanisms underlying general anaesthesia. Moreover, anaesthetic agents modulate the electroencephalographic (EEG) activity of subjects and hence act on neural populations. To better understand how tonic inhibition in single neurons affects neural populations, and hence the EEG, this work considers single neurons and neural populations in a steady state and studies, numerically and analytically, the modulation of their firing rate and nonlinear gain with respect to different levels of tonic inhibition. We consider populations of both type-I (leaky integrate-and-fire model) and type-II (Morris-Lecar model) neurons. To bridge the single-neuron description to the population description analytically, a recently proposed statistical approach is employed, which allows us to derive new analytical expressions for the population firing rate of type-I neurons. In addition, the work derives a novel transfer function for type-I neurons as considered in neural mass models and briefly studies the interaction of synaptic and extra-synaptic inhibition. We reveal a strong subtractive and divisive effect of tonic inhibition in type-I neurons, i.e., a shift of the firing rate to higher excitation levels accompanied by a change of the nonlinear gain. Tonic inhibition shortens the excitation window of type-II neurons and their populations while maintaining the nonlinear gain. The results are interpreted in the context of recent experimental findings under propofol-induced anaesthesia.
General anaesthesia; Firing rate; Spiking neurons; Neural mass model
We use Hamilton-Jacobi-Bellman methods to find minimum-time and energy-optimal control strategies to terminate seizure-like bursting behavior in a conductance-based neural model. Averaging is used to eliminate fast variables from the model, and a target set is defined through bifurcation analysis of the slow variables of the model. This method is demonstrated for a single-neuron model and for a network model to show its efficacy in terminating bursting once it begins. This work represents a numerical proof-of-concept that a new class of control strategies can be employed to mitigate bursting, and could ultimately be adapted to treat medically intractable epilepsy in patient-specific models.
Optimal control; Neural bursting; Epilepsy
The antennal lobe (AL) is the primary structure within the locust’s brain that receives information from olfactory receptor neurons (ORNs) within the antennae. Different odors activate distinct subsets of ORNs, implying that neuronal signals at the level of the antennae encode odors combinatorially. Within the AL, however, different odors produce signals with long-lasting dynamic transients carried by overlapping neural ensembles, suggesting a more complex coding scheme. In this work we use a large-scale point neuron model of the locust AL to investigate this shift in stimulus encoding and potential consequences for odor discrimination. Consistent with experiment, our model produces stimulus-sensitive, dynamically evolving populations of active AL neurons. Our model relies critically on the persistence time-scale associated with ORN input to the AL, sparse connectivity among projection neurons, and a synaptic slow inhibitory mechanism. Collectively, these architectural features can generate network odor representations of considerably higher dimension than would be generated by a direct feed-forward representation of stimulus space.
Linear discriminability; Principal component analysis
We use optimal control theory to design a methodology for finding locally optimal stimuli to desynchronize a model of neurons with extracellular stimulation. This methodology yields stimuli that lead to positive Lyapunov exponents and hence desynchronize a neural population. We analyze this methodology in the presence of interneuron coupling to make predictions about the strength of stimulation required to overcome the synchronizing effects of coupling. This methodology suggests a powerful alternative to pulsatile stimuli for deep brain stimulation, as it uses less energy than pulsatile stimuli and could eliminate the time-consuming tuning process.
Parkinson’s disease; Lyapunov exponent; Optimal control theory
Theta (4–12 Hz) and gamma (30–80 Hz) rhythms are considered important for cortical and hippocampal function. Although several neuron types are implicated in rhythmogenesis, the exact cellular mechanisms remain unknown. Subthreshold electric fields provide a flexible, area-specific tool to modulate neural activity and directly test functional hypotheses. Here we present experimental and computational evidence of the interplay among hippocampal synaptic circuitry, neuronal morphology, external electric fields, and network activity. Electrophysiological data are used to constrain and validate an anatomically and biophysically realistic model of area CA1 containing pyramidal cells and two interneuron types: dendritic- and perisomatic-targeting. We report two lines of results: addressing the network structure capable of generating theta-modulated gamma rhythms, and demonstrating electric field effects on those rhythms. First, theta-modulated gamma rhythms require specific inhibitory connectivity. In one configuration, GABAergic axo-dendritic feedback on pyramidal cells is only effective in proximal but not distal layers. An alternative configuration requires two distinct perisomatic interneuron classes, one exclusively receiving excitatory contacts, the other additionally targeted by inhibition. These observations suggest novel roles for particular classes of oriens and basket cells. The second major finding is that subthreshold electric fields robustly alter the balance between different rhythms. Independent of network configuration, positive electric fields decrease, while negative fields increase the theta/gamma ratio. Moreover, electric fields differentially affect average theta frequency depending on specific synaptic connectivity. These results support the testable prediction that subthreshold electric fields can alter hippocampal rhythms, suggesting new approaches to explore their cognitive functions and underlying circuitry.
Pyramidal; Interneuron; Theta rhythm; Gamma rhythm
Learning to categorise sensory inputs by generalising from a few examples whose category is precisely known is a crucial step for the brain to produce appropriate behavioural responses. At the neuronal level, this may be performed by adaptation of synaptic weights under the influence of a training signal, in order to group spiking patterns impinging on the neuron. Here we describe a framework that allows spiking neurons to perform such “supervised learning”, using principles similar to the Support Vector Machine, a well-established and robust classifier. Using a hinge-loss error function, we show that requesting a margin similar to that of the SVM improves performance on linearly non-separable problems. Moreover, we show that using pools of neurons to discriminate categories can also increase the performance by sharing the load among neurons.
Electronic supplementary material
The online version of this article (doi:10.1007/s10827-014-0505-9) contains supplementary material, which is available to authorized users.
Supervised learning; Spiking neurons; Tempotron; Support vector machine
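The hinge-loss, margin-based learning rule described above can be sketched in weight space. The toy below operates on fixed feature vectors (e.g. binned input spike counts) rather than spike times, so it illustrates only the SVM-style margin idea, not the full spiking framework; the function name and hyperparameters are assumptions.

```python
import numpy as np

def train_hinge(X, y, lam=0.01, lr=0.1, epochs=100, seed=0):
    """Stochastic subgradient descent on the regularized hinge loss (as in
    the SVM): a weight update occurs only for examples that violate the
    margin y * (w . x) >= 1, so correctly and confidently classified
    patterns leave the weights untouched.
    X: per-example feature vectors; y: labels in {-1, +1}."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for i in rng.permutation(len(y)):
            margin = y[i] * (X[i] @ w)
            grad = lam * w - (y[i] * X[i] if margin < 1.0 else 0.0)
            w -= lr * grad
    return w
```

Requesting the margin (the `>= 1` condition) rather than mere correct classification is what the abstract credits for improved performance on linearly non-separable problems.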
In vivo recordings in rat somatosensory cortex suggest that excitatory and inhibitory inputs are often correlated during spontaneous and sensory-evoked activity. Using a computational approach, we study how the interplay of input correlations and timing observed in experiments controls the spiking probability of single neurons. Several correlation-based mechanisms are identified, which can effectively switch a neuron on and off. In addition, we investigate the transfer of input correlation to output correlation in pairs of neurons, at the spike train and the membrane potential levels, by considering spike-driving and non-spike-driving inputs separately. In particular, we propose a plausible explanation for the in vivo finding that membrane potentials in neighboring neurons are correlated, but the spike-triggered averages of membrane potentials preceding a spike are not: Neighboring neurons possibly receive an ongoing bombardment of correlated subthreshold background inputs, and occasionally uncorrelated spike-driving inputs.
Input correlation; Rate modulation; Correlation transfer; Temporal structure; Barrel cortex
Hippocampal population codes play an important role in the representation of spatial environments and in spatial navigation. Uncovering the internal representation of hippocampal population codes will help us understand the neural mechanisms of the hippocampus. For instance, uncovering the patterns represented by rat hippocampal (CA1) pyramidal cells during periods of either navigation or sleep has been an active research topic over the past decades. However, previous approaches to analyzing or decoding the firing patterns of population neurons all assume knowledge of the place fields, which are estimated from training data a priori. It remains unclear how we can extract information from population neuronal responses either without a priori knowledge or in the presence of a finite sampling constraint. Answering this question would enhance our ability to examine population neuronal codes under different experimental conditions. Using the rat hippocampus as a model system, we attempt to uncover the hidden “spatial topology” represented by the hippocampal population codes. We develop a hidden Markov model (HMM) and a variational Bayesian (VB) inference algorithm to achieve this computational goal, and we apply the analysis to extensive simulation and experimental data. Our empirical results show a promising direction for discovering structural patterns of ensemble spike activity during periods of active navigation. This study also provides useful insights for future exploratory data analysis of population neuronal codes during periods of sleep.
Hidden Markov model; Expectation-Maximization; Variational Bayesian inference; Place cells; Population codes; Spatial topology; Force-based algorithm
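At the core of the HMM analysis above is recursive estimation of a hidden state (e.g. latent spatial position) from population spike counts. The forward pass below is a generic sketch, not the authors' VB algorithm; it assumes per-time-bin observation log-likelihoods (e.g. from Poisson firing models) have already been computed.

```python
import numpy as np

def forward(log_lik, trans, init):
    """HMM forward pass with per-step normalization: log_lik is (T, S) with
    observation log-likelihoods for T time bins and S hidden states, trans
    is the (S, S) transition matrix, init the initial state distribution.
    Returns filtered state probabilities alpha (T, S) and the log-evidence."""
    T, S = log_lik.shape
    alpha = np.zeros((T, S))
    log_ev = 0.0
    pred = init                          # one-step-ahead state prediction
    for t in range(T):
        a = pred * np.exp(log_lik[t])    # fold in the observation
        c = a.sum()                      # normalizer = p(obs_t | past)
        alpha[t] = a / c
        log_ev += np.log(c)
        pred = alpha[t] @ trans          # propagate through the dynamics
    return alpha, log_ev
```

In a VB treatment, such forward-backward sweeps are alternated with updates of the variational posteriors over the transition and emission parameters, with the log-evidence replaced by a lower bound.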
Receptive field properties of neurons in A1 can rapidly adapt their shapes during task performance in accord with specific task demands and salient sensory cues (Fritz et al., Hearing Research, 206:159–176, 2005a, Nature Neuroscience, 6: 1216–1223, 2003). Such modulatory changes selectively enhance overall cortical responsiveness to target (foreground) sounds and thus increase the likelihood of detection against the background of reference sounds. In this study, we develop a mathematical model to describe how enhancing discrimination between two arbitrary classes of sounds can lead to the observed receptive field changes in a variety of spectral and temporal discrimination tasks. Cortical receptive fields are modeled as filters that change their spectro-temporal tuning properties so as to respond best to the discriminatory acoustic features between foreground and background stimuli. We also illustrate how biologically plausible constraints on the spectro-temporal tuning of the receptive fields can be used to optimize the plasticity. Results of the model simulations are compared to published data from a variety of experimental paradigms.
Auditory; Cortical; Rapid task-related plasticity; Computational model
Natural sensory inputs, such as speech and music, are often rhythmic. Recent studies have consistently demonstrated that these rhythmic stimuli cause the phase of oscillatory, i.e. rhythmic, neural activity, recorded as local field potential (LFP), electroencephalography (EEG) or magnetoencephalography (MEG), to synchronize with the stimulus. This phase synchronization, when not accompanied by any increase of response power, has been hypothesized to be the result of phase resetting of ongoing, spontaneous, neural oscillations measurable by LFP, EEG, or MEG. In this article, however, we argue that this same phenomenon can be easily explained without any phase resetting, with the stimulus-synchronized activity generated independently of background neural oscillations. We demonstrate with a simple (but general) stochastic model that, purely due to statistical properties, phase synchronization, as measured by ‘inter-trial phase coherence’, is much more sensitive to stimulus-synchronized neural activity than is power. These results question the usefulness of analyzing the power and phase of stimulus-synchronized activity as separate and complementary measures, particularly when attempting to demonstrate whether stimulus-phase-locked neural activity is generated by phase resetting of ongoing neural oscillations.
Phase resetting; Neural oscillations; Phase coherence; Entrainment
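The ‘inter-trial phase coherence’ measure discussed above is the length of the mean resultant vector of per-trial phases. A minimal sketch (illustrative, not the authors' analysis code):

```python
import numpy as np

def itpc(phases):
    """Inter-trial phase coherence: given a (trials x time) array of phases
    in radians, map each phase to a unit vector on the complex circle,
    average across trials, and take the magnitude. 1 = perfect phase
    locking; ~0 = random phases."""
    return np.abs(np.exp(1j * phases).mean(axis=0))
```

For fully random phases the expected ITPC falls off only as roughly 1/sqrt(n_trials), so even a small phase-locked component added to noise lifts ITPC well above this floor long before any measurable change in power, which is the statistical sensitivity argument the abstract makes.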
In order to properly capture spike-frequency adaptation with a simplified point-neuron model, we study approximations of Hodgkin-Huxley (HH) models including slow currents by exponential integrate-and-fire (EIF) models that incorporate the same types of currents. We optimize the parameters of the EIF models under the external drive consisting of AMPA-type conductance pulses using the current-voltage curves and the van Rossum metric to best capture the subthreshold membrane potential, firing rate, and jump size of the slow current at the neuron’s spike times. Our numerical simulations demonstrate that, in addition to these quantities, the approximate EIF-type models faithfully reproduce bifurcation properties of the HH neurons with slow currents, which include spike-frequency adaptation, phase-response curves, critical exponents at the transition between a finite and infinite number of spikes with increasing constant external drive, and bifurcation diagrams of interspike intervals in time-periodically forced models. Dynamics of networks of HH neurons with slow currents can also be approximated by corresponding EIF-type networks, with the approximation being at least statistically accurate over a broad range of Poisson rates of the external drive. For the form of external drive resembling realistic, AMPA-like synaptic conductance response to incoming action potentials, the EIF model affords great savings of computation time as compared with the corresponding HH-type model. Our work shows that the EIF model with additional slow currents is well suited for use in large-scale, point-neuron models in which spike-frequency adaptation is important.
Adaptation current; Integrate-and-fire networks; Bifurcations; Numerical methods; Efficient neuronal models
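An EIF-type model with a slow adaptation current of the kind discussed above can be sketched as follows. This is a generic adaptive-EIF illustration, not one of the paper's optimized models; all parameter values are assumptions.

```python
import numpy as np

def simulate_eif(I, dt=0.05, tau=10.0, v_rest=-65.0, delta_T=2.0,
                 v_T=-50.0, v_cut=-30.0, v_reset=-68.0):
    """Exponential integrate-and-fire with a slow adaptation current w:
    the exponential term models the sodium spike-initiation upstroke; w
    jumps at each spike and decays slowly, lengthening later interspike
    intervals (spike-frequency adaptation). I: current per time step."""
    tau_w, b = 100.0, 0.5            # adaptation time constant and jump size
    v, w = v_rest, 0.0
    spike_times = []
    for k, i_t in enumerate(I):
        dv = (-(v - v_rest) + delta_T * np.exp((v - v_T) / delta_T)
              - w + i_t) * dt / tau
        w += -w * dt / tau_w          # slow decay of the adaptation current
        v += dv
        if v >= v_cut:                # spike: record, reset, bump adaptation
            spike_times.append(k * dt)
            v = v_reset
            w += b
    return spike_times
```

The per-spike jump and slow decay of `w` are exactly the "jump size of the slow current at the neuron's spike times" that the fitting procedure in the abstract matches against the Hodgkin-Huxley model.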
Although synaptic output is known to be modulated by changes in presynaptic calcium channels, additional pathways for calcium entry into the presynaptic terminal, such as non-selective channels, could contribute to the modulation of short-term synaptic dynamics. We address this issue using computational modeling. The neuropeptide proctolin modulates the inhibitory synapse from the lateral pyloric (LP) to the pyloric dilator (PD) neuron, two slow-wave bursting neurons in the pyloric network of the crab Cancer borealis. Proctolin enhances the strength of this synapse and also changes its dynamics. Whereas in control saline the synapse shows depression independent of the amplitude of the presynaptic LP signal, in proctolin, high-amplitude presynaptic LP stimulation keeps the synapse depressing while low-amplitude stimulation causes facilitation. We use simple calcium-dependent release models to explore two alternative mechanisms underlying these modulatory effects. In the first model, proctolin directly targets calcium channels by changing their activation kinetics, which results in gradual accumulation of calcium with low-amplitude presynaptic stimulation, leading to facilitation. The second model uses the fact that proctolin is known to activate a non-specific cation current, IMI. In this model, we assume that the MI channels have some permeability to calcium, modeled as the result of a slow conformational change after binding calcium. This generates a gradual increase in calcium influx into the presynaptic terminals through the modulatory channel, similar to that described in the first model. Each of these models can explain the modulation of the synapse by proctolin, but with different consequences for network activity.
The mechanoelectrical transducer (MET) is a crucial component of the mammalian auditory system. The gating mechanism of the MET channel remains puzzling: despite many speculations, essential molecular building blocks have yet to be identified. To understand the working principle of the mammalian MET, we propose a molecular-level prototype that comprises a charged blocker, a realistic ion channel, and its surrounding membrane. To validate the proposed prototype, we make use of a well-established ion channel theory, the Poisson-Nernst-Planck equations, for three-dimensional (3D) numerical simulations. A wide variety of model parameters, including bulk ion concentration, applied external voltage, blocker charge, and blocker displacement, are explored to understand the basic function of the proposed MET prototype. We show that our prototype's prediction of channel open probability in response to relative blocker displacement is in remarkable accordance with experimental observations from rat cochlear outer hair cells. Our results suggest that the tip links connecting hair bundles gate MET channels.
Mechanoelectrical transducer; Gating mechanism; Poisson-Nernst-Planck model; Ion channel
Perceptual multistability is a phenomenon in which alternate interpretations of a fixed stimulus are perceived intermittently. Although correlates between activity in specific cortical areas and perception have been found, the complex patterns of activity and the underlying mechanisms that gate multistable perception are little understood. Here, we present a neural field competition model in which competing states are represented in a continuous feature space. Bifurcation analysis is used to describe the different types of complex spatio-temporal dynamics produced by the model in terms of several parameters and for different inputs. The dynamics of the model were then compared to human perception investigated psychophysically during long presentations of an ambiguous, multistable motion pattern known as the barberpole illusion. To do this, the model is operated in a parameter range where known physiological response properties are reproduced whilst also working close to bifurcation. The model accounts for characteristic behaviour from the psychophysical experiments in terms of the type of switching observed and changes in the rate of switching with respect to contrast. In this way, the modelling study sheds light on the underlying mechanisms that drive perceptual switching in different contrast regimes. The general approach presented is applicable to a broad range of perceptual competition problems in which spatial interactions play a role.
Multistability; Competition; Perception; Neural fields; Bifurcation; Motion
Lateral inhibition of cells surrounding an excited area is a key property of sensory systems, sharpening the preferential tuning of individual cells in the presence of closely related input signals. In the olfactory pathway, a dendrodendritic synaptic microcircuit between mitral and granule cells in the olfactory bulb has been proposed to mediate this type of interaction through granule cell inhibition of surrounding mitral cells. However, it is becoming evident that odor inputs result in broad activation of the olfactory bulb, with interactions that go beyond neighboring cells. Using a realistic modeling approach, we show how backpropagating action potentials in the long lateral dendrites of mitral cells, together with granule cell actions on mitral cells within narrow columns forming glomerular units, can provide a mechanism to activate strong local inhibition between arbitrarily distant mitral cells. The simulations predict a new role for the dendrodendritic synapses in the multicolumnar organization of the granule cells. This new paradigm gives insight into the functional significance of the patterns of connectivity revealed by recent viral tracing studies. Together, these findings suggest a functional wiring of the olfactory bulb that could greatly expand the computational roles of the mitral–granule cell network.
Olfactory processing; Modeling; Mitral cells; Granule cells
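The key idea above, that active backpropagation lets a mitral cell recruit granule-cell columns at arbitrary distances, can be caricatured in a few lines. This toy contrasts regenerative versus passive dendritic propagation; the length constant, gain, and threshold are hypothetical illustrations, not values from the realistic model.

```python
import math

def bap_amplitude(distance_um, active=True, lam=300.0):
    """Relative amplitude of a backpropagating action potential (bAP)
    at a dendrodendritic synapse `distance_um` from the soma.

    active : regenerative propagation keeps the amplitude ~constant
    lam    : passive length constant in micrometers (hypothetical)
    """
    if active:
        return 1.0
    return math.exp(-distance_um / lam)

def granule_inhibition(distance_um, active=True, gain=0.8, threshold=0.5):
    """Inhibition delivered to the mitral cell of a distant column:
    the local granule-cell column is recruited (all-or-none) only if
    the bAP amplitude at its synapse exceeds a threshold."""
    amp = bap_amplitude(distance_um, active)
    return gain if amp >= threshold else 0.0
```

With active propagation the inhibition delivered through a granule-cell column is the same at 100 µm and at 1000 µm, whereas passive spread would silence the distal synapses, which is the intuition behind distance-independent lateral inhibition in the mitral–granule cell network.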
The lack of a deeper understanding of how olfactory sensory neurons (OSNs) encode odors has hindered progress in understanding olfactory signal processing in higher brain centers. Here we employ methods of system identification to investigate how Drosophila OSNs encode time-varying odor stimuli and represent them in the spike domain for further processing. To apply system identification techniques, we built a novel low-turbulence odor delivery system that allowed us to deliver airborne stimuli in a precise and reproducible fashion. The system provides a 1% tolerance in stimulus reproducibility and exact control of odor concentration and concentration gradient on a millisecond time scale. Using this setup, we recorded and analyzed the in vivo response of OSNs to a wide range of time-varying odor waveforms. We report for the first time that, across trials, the response of Or59b OSNs is highly precise and reproducible. Further, we show empirically that the response of an OSN depends not only on the odor concentration but also on its rate of change. Moreover, we demonstrate that a two-dimensional (2D) Encoding Manifold in a concentration-concentration gradient space provides a quantitative description of the neuron's response. We then use the white noise system identification methodology to construct one-dimensional (1D) and two-dimensional (2D) Linear-Nonlinear-Poisson (LNP) cascade models of the sensory neuron for a fixed mean odor concentration and fixed contrast. We show that, in terms of predicting the intensity rate of the spike train, the 2D LNP model performs on par with the 1D LNP model, with a root mean-square error (RMSE) increase of about 5 to 10%. Surprisingly, we find that for a fixed contrast of the white noise odor waveforms, the nonlinear block of each of the two models changes with the mean input concentration. The shape of the nonlinearities of both the 1D and the 2D LNP models appears to be, for a fixed mean of the odor waveform, independent of the stimulus contrast. This suggests that white noise system identification of Or59b OSNs depends only on the first moment of the odor concentration. Finally, by comparing the 2D Encoding Manifold and the 2D LNP model, we demonstrate that the OSN identification results depend on the particular type of test odor waveforms employed. This suggests an adaptive neural encoding model for Or59b OSNs that changes its nonlinearity in response to the odor concentration waveforms.
System identification; Olfactory sensory neurons; White noise analysis; I/O modeling
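The 1D LNP cascade referred to above has a simple generic form: a linear filter, a static nonlinearity, and Poisson spike generation. The sketch below uses a hypothetical exponential filter and logistic nonlinearity purely for illustration; none of the shapes or parameter values are the ones identified in the paper.

```python
import math
import random

def lnp_spikes(stimulus, dt=1e-3, tau=0.02, rate_max=150.0, seed=0):
    """1D Linear-Nonlinear-Poisson (LNP) cascade.

    L: convolve the stimulus with a causal exponential filter (hypothetical)
    N: logistic nonlinearity mapping filtered drive -> firing rate (Hz)
    P: Bernoulli approximation of an inhomogeneous Poisson process
    """
    rng = random.Random(seed)
    # Linear stage: causal exponential filter, normalized to unit area.
    k = [math.exp(-t * dt / tau) for t in range(int(5 * tau / dt))]
    norm = sum(k) * dt
    k = [ki / norm for ki in k]
    rates, spikes = [], []
    for n in range(len(stimulus)):
        drive = sum(k[j] * stimulus[n - j] * dt
                    for j in range(min(n + 1, len(k))))
        rate = rate_max / (1.0 + math.exp(-4.0 * (drive - 0.5)))  # N stage
        rates.append(rate)
        spikes.append(1 if rng.random() < rate * dt else 0)       # P stage
    return rates, spikes
```

In white noise identification, the filter is typically estimated by spike-triggered averaging and the nonlinearity recovered by comparing filtered-stimulus histograms with and without spikes; the abstract's central observation is that the recovered nonlinear block shifts with the mean odor concentration.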