
Front Neural Circuits. 2017; 11: 21.

Published online 2017 March 31. doi: 10.3389/fncir.2017.00021

PMCID: PMC5374876

Ján Antolík^{*}

Unité de Neurosciences, Information et Complexité, Centre National de la Recherche Scientifique UPR 3293, Gif-sur-Yvette, France

Edited by: Claude Desplan, New York University, USA

Reviewed by: Evan H. Feinberg, University of California, San Francisco, USA; Yaroslav I. Molkov, Georgia State University, USA

Received 2017 January 11; Accepted 2017 March 13.

Copyright © 2017 Antolík.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

Competitive interactions are believed to underlie many types of cortical processing, ranging from memory formation and attention to the development of cortical functional organization (e.g., the development of orientation maps in primary visual cortex). In the latter case, the competitive interactions happen along the cortical surface, with local populations of neurons reinforcing each other while competing with those displaced more distally. This specific configuration of lateral interactions is, however, in stark contrast with the known properties of the anatomical substrate: excitatory connections (mediating reinforcement) have a longer reach than inhibitory ones (mediating competition). No satisfactory biologically plausible resolution of this conflict between anatomical measurements and assumed cortical function has been proposed. Recently, a specific pattern of delays between different types of neurons in cat cortex has been discovered, in which direct mono-synaptic excitation has approximately the same delay as the combined delays of the disynaptic inhibitory interactions between excitatory neurons (i.e., the sum of delays from excitatory to inhibitory and from inhibitory to excitatory neurons). Here we show that this specific pattern of delays represents a biologically plausible explanation for how short-range inhibition can support the competitive interactions that underlie the development of orientation maps in primary visual cortex. We demonstrate this statement analytically under simplifying conditions, and subsequently show using network simulations that the development of orientation maps is preserved when long-range excitation, direct inhibitory-to-inhibitory interactions, and a moderate inequality in the delays between the excitatory and inhibitory pathways are added.

Competition between populations of neurons has been proposed as one of the canonical computations of cortical networks, and has been hypothesized to underlie a range of brain functions including working memory (Amit and Brunel, 1995; Durstewitz et al., 2000), orientation tuning (Ben-Yishai et al., 1995; Somers et al., 1995), and functional map development (von der Malsburg, 1973; Miikkulainen et al., 2005; Antolík and Bednar, 2011). In developmental models of functional cortical organization the competition occurs between populations of neurons spatially offset along the cortical surface, whereby local populations mutually reinforce each other via excitatory connections (short-range excitation), while long-range inhibition facilitates competition between the local populations and stabilizes the activity in the network. This so-called Mexican-hat arrangement of recurrent interactions (Figure 1A), however, is in stark contrast to the known anatomical arrangement of cortical circuitry: excitatory neurons (especially in superficial layers) tend to form long-range arborizations spanning multiple columns, while the axons of the majority of inhibitory neurons are confined to an area only several hundred micrometers in diameter (Budd and Kisvárday, 2001; Buzás et al., 2006) (Figure 1B).

As pointed out by Muir and Cook (2014) in their recent study, previous topologically organized models of cortical competition have solved this apparent conflict either by relying on the anatomically unsupported Mexican-hat profile of lateral interactions (von der Malsburg, 1973; Miikkulainen et al., 2005), or by relying on other biologically unrealistic properties, such as selective targeting of inhibitory neurons by long-range excitatory connections (Law, 2009; Rutishauser et al., 2012) or instantaneous synaptic transmission coupled with the omission of recurrent inhibition (Kang et al., 2003; Grabska-Barwinska and von der Malsburg, 2008; Levy and Reyes, 2011).

A common hypothesis about how to reconcile Mexican-hat lateral interactions with anatomical reality is that the range of the effective inhibitory influence of an excitatory neuron onto other excitatory neurons (via the disynaptic pathway from excitatory to inhibitory to excitatory neurons—the disynaptic inhibition) will correspond to the cumulative reach of the direct excitatory and the direct inhibitory connections. The effective inhibitory interactions between excitatory neurons would thus have a longer range than the direct excitatory connections, supporting the use of Mexican-hat lateral interactions in population models (i.e., models without an explicit separation of excitatory and inhibitory populations). Even though very intuitive, this explanation has never been explicitly demonstrated, and it omits the fact that under the reasonable null hypothesis of equal transmission delays for all connections, the recurrent disynaptic inhibition will lag the direct recurrent excitation. As it turns out, this is crucial: a recent model analysis by Muir and Cook (2014) shows that, under the assumption of uniform transmission delays, the presence of competition across the cortical surface is predicted well by the anatomy of direct excitatory and inhibitory coupling, and that multi-synaptic network effects are negligible, effectively rejecting the disynaptic explanation behind Mexican-hat interactions. In conclusion, no satisfactory explanation currently exists for how topological functional organization can develop in cortical networks in a manner consistent with the anatomical findings.

As we will show in this study, the nature of the transmission delays between the different combinations of neural types (excitatory, inhibitory) holds the key to resolving this long-standing open question. A recent study by Ohana et al. (2012) revealed a specific pattern of transmission delays between the different neural type combinations. Specifically, they found that excitatory-to-excitatory connections are slow, while excitatory-to-inhibitory connections are fast. In this study we explore the possibility that this specific transmission-delay pattern is the missing link that can explain how short-range inhibition can lead to effective cortical competition. We employ computational models to show that the fast excitatory-to-inhibitory-to-excitatory pathway allows the disynaptic inhibition to generate effective Mexican-hat-like lateral interactions (Figure 1C), thus for the first time explaining competition in topologically organized cortical networks without biologically implausible assumptions.

To demonstrate the proposed implementation of competitive interactions on a specific feature of V1 processing, in this study we focus on models of the development of functional cortical organization. Functional properties of neurons in primary visual cortex, such as orientation, color and spatial-frequency preference, are not randomly distributed along the cortical surface but rather form smooth topological maps (Swindale, 1996; Goodhill, 2007), whereby nearby neurons prefer similar stimulus features. Such smooth topological mapping of function onto the cortical surface is ubiquitous throughout the cortex (Huth et al., 2016), is found in multiple species, and is implicated in a number of functional properties of V1 and of cortex in general. Here we restrict our attention to orientation preference, as it is the best-explored example of competition-driven development of functional cortical organization, but these results generalize to the development of other cortical topological properties, as well as potentially to other competition-based computations.

Here we describe the two models and their variants used in this study, and finish with a description of a measure for assessing the extent to which model orientation maps resemble their biological counterparts. Since this study leans heavily on methodology developed in our previous studies, we offer here only a short description and refer readers to the original articles for the details.

In Sections 3.1 through 3.3 (models 1, 2 and 3; see Figure 3) we use the GCAL model (Stevens et al., 2013), which is the most advanced variant of the LISSOM (Laterally Interconnected Synergetically Self-Organizing Map) algorithm introduced by Miikkulainen et al. (2005), which is itself based on earlier Self-Organizing Map models (Kohonen, 1982). Several mechanistic explanations of how orientation maps can develop in primary visual cortex have been proposed, but most, including the LISSOM family of models, involve two key ingredients: (1) stimulus-driven Hebbian learning on the thalamo-cortical synapses, which ensures the formation of an afferent connectivity pattern inducing Gabor-like RFs (and consequently orientation, frequency and phase preference) in V1 neural units; and (2) Mexican-hat-like effective lateral interactions within the cortical population of neural units, which induce co-activation among proximate units and competition between more distal units.

To understand how these two mechanisms lead to the development of orientation maps, consider first the initial state of the model with isotropic afferent connectivity. When a stimulus is presented to the model for the first time, the activity of the cortical population will, after a few simulation steps, settle into a pattern that can best be described as a random placement of “blobs,” whereby nearby neurons tend to be either co-active or silent. At this point, the placement of the local activity centers (the “blobs”) is essentially random, determined by the interplay of the competitive influence of the lateral interactions with whatever sources of variability are present in the model (e.g., random noise in the initial connections or the content of the stimulus). Hebbian learning will ensure that all neurons within the active local populations adjust their afferent weights slightly toward the stimulus pattern appearing within their RFs. They will thus, in subsequent iterations, be slightly more activated by similar patterns.

This iterative process of stimulus presentation, activity-pattern formation and Hebbian adjustment of the thalamo-cortical weights keeps repeating. Note, however, that in subsequent iterations, neurons whose afferent weights are more similar to the currently presented stimulus falling within their RF will tend to be more active than those whose RF at that point differs from the presented stimulus. Importantly, even small differences in the initial activations will be magnified by the lateral competitive interactions, driving nearby neurons to develop, over time, sharp selectivity for similar stimulus features, while more distal neurons are driven to develop selectivity to other stimulus features. Which features will be mapped onto the cortical surface depends on the exact statistics of the stimuli shown during training. It turns out that if the stimuli are natural images, or other artificial stimuli with strong oriented components, a smooth representation of orientation preference across the cortical surface (the orientation map) will develop. In this study we focus on this specific feature (orientation), as it is the most well studied one, but it is important to emphasize that, given the right stimulation conditions, multiplexed representations of other features, such as ocular dominance, disparity, spatial frequency and others, can develop (Miikkulainen et al., 2005).

In the remainder of this section we describe one specific implementation of a model whose overall dynamics follow the broad description outlined in the previous paragraphs. For clarity and consistency the following model description closely follows the methodology sections in our previous work (Antolík and Bednar, 2011; Stevens et al., 2013). The architecture of the 3 GCAL variants presented in this paper is depicted in Figures 2, 3A–C. The models are implemented in the Topographica simulator, freely available at www.topographica.org. Each of the GCAL models consists of a set of sheets of model neural units, corresponding to: (1) the photo-receptors, and (2) the combined effects of the retinal ganglion cells (RGC) and ON and OFF LGN cells. Furthermore, each model has either one V1 sheet of units with direct excitatory and inhibitory interactions (model 1; see Figure 3), or two V1 sheets, one corresponding to excitatory and one to inhibitory neurons (models 2 and 3; see Figure 3). A model sheet is a 2D array of computational elements (called units or, loosely, neurons), with activation and plasticity equations as described below, referenced by a coordinate system we will refer to as sheet coordinates, where the center of the sheet is designated (0.0, 0.0). The number of units simulated in each sheet is determined by the density of units per unit length in both sheet dimensions. All cortical sheets have nominal dimensions 1.0 × 1.0 in sheet coordinates. The sizes of the RGC/LGN (1.5 × 1.5) and photo-receptor (3.5 × 3.5) sheets were chosen to ensure that each unit in the receiving sheet has a complete set of connections, thus minimizing edge effects in the RGC/LGN and V1 sheets. The density of units per 1.0 × 1.0 area is 96 × 96 for the photo-receptor and RGC/LGN ON and OFF sheets, and likewise 96 × 96 for the cortical sheets.

As a simplification, GCAL ignores the detailed temporal properties of the sub-cortical neural responses and of signal propagation along the various types of connections. Instead, the model ON/OFF units have a constant, sustained output, and all connections have a constant delay, independent of the physical length of that connection. The simulator operates in discrete time steps. The retinal input changes every 16 time steps (and is kept constant during this period), and therefore the afferent inputs to the V1 sheet(s) are effectively updated every 16 steps. This process is a discrete simulation of an otherwise continuous series of changes in membrane potential due to incoming spikes and the consequent generation of spikes. One such training iteration (16 steps) in the model represents one visual fixation, i.e., an iteration consists of a constant retinal activation, followed by processing at the ON/OFF and cortical levels.

The activation value Ψ_{i} of unit i in the photo-receptor sheet (P) is given by the gray-scale value of the chosen image at that point. In this study we use two-dimensional elongated Gaussian patterns, whose center coordinates and orientation are sampled from a uniform distribution that covers the area of the photo-receptor sheet and the full range of orientations. Every 16 time steps, two such Gaussian patterns are superimposed to form the visual input.
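The stimulus-generation step can be sketched as follows; the grid resolution, the Gaussian widths `sigma_long`/`sigma_short`, and the use of an elementwise maximum for the superposition are illustrative assumptions, not the exact Topographica settings:

```python
import numpy as np

def elongated_gaussian(size, center, orientation, sigma_long=0.3, sigma_short=0.06):
    """Elongated 2D Gaussian blob on a [-0.5, 0.5]^2 sheet-coordinate grid.

    center: (x, y) of the blob; orientation: radians.
    The sigma values are hypothetical illustration parameters.
    """
    coords = np.linspace(-0.5, 0.5, size)
    x, y = np.meshgrid(coords, coords)
    # Rotate coordinates into the Gaussian's own reference frame
    xr = (x - center[0]) * np.cos(orientation) + (y - center[1]) * np.sin(orientation)
    yr = -(x - center[0]) * np.sin(orientation) + (y - center[1]) * np.cos(orientation)
    return np.exp(-(xr**2 / (2 * sigma_long**2) + yr**2 / (2 * sigma_short**2)))

rng = np.random.default_rng(0)
# Two blobs with uniformly sampled centers and orientations, as in the text
patterns = [elongated_gaussian(96, rng.uniform(-0.4, 0.4, 2), rng.uniform(0, np.pi))
            for _ in range(2)]
stimulus = np.maximum(patterns[0], patterns[1])  # superposition (assumed elementwise max)
```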

The ON/OFF units are called RGC/LGN units because they represent the entire pathway between the retinal photo-receptors and V1, including the retinal ganglion cells, LGN cells, and the connection pathways. The activation level for a unit at position *j* in an RGC/LGN sheet at time *t* is defined as:

$${\eta}_{j}(t)=f\left(\frac{{A}_{j}(t)}{c+{\gamma}_{L}{\displaystyle {\sum}_{k}{A}_{k}(t){l}_{kj}}}\right)$$

(1)

where

$${A}_{j}(t)={\gamma}_{F}{\displaystyle \sum _{i\in {F}_{j}}{\Psi}_{i}}(t){\omega}_{ij}$$

(2)

The activation function *f* is a half-wave rectifying function that ensures positive activation values, constant γ_{F} = 1.5 defines the overall strength of the afferent connections from the retina, constant γ_{L} = 0.6 defines the strength of the lateral connections within the RGC/LGN sheet, constant *c* = 0.11 defines the slope of the gain, Ψ_{i} is the activation of unit *i* taken from the set of photo-receptors from which RGC/LGN unit *j* receives input (its connection field *F*_{j}), ω_{ij} is the connection weight from unit *i* in the retina to unit *j* in the RGC/LGN, and *l*_{kj} is the lateral connection weight from RGC/LGN unit *k* to RGC/LGN unit *j*. Weights from the photo-receptors to units in the ON and OFF channels are set to fixed strengths with a difference-of-Gaussians kernel (σ_{center} = 0.07385, σ_{surround} = 0.2954, in sheet dimensions), with ON connection fields having a positive center and a negative surround and vice versa for OFF. The lateral RGC/LGN weights are 2D Gaussians with kernel size σ = 0.25. The center of the afferent and lateral connection field of each ON/OFF unit is mapped to the location in the photo-receptor and LGN sheet corresponding to the location of that unit in sheet coordinates, making all these projections retinotopic.
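For a single RGC/LGN unit, Equations (1) and (2) reduce to a rectified, divisively normalized dot product. A minimal sketch, with toy vectors standing in for the real connection fields:

```python
import numpy as np

def lgn_unit_response(psi, w_aff, a_neighbors, l_lat,
                      gamma_F=1.5, gamma_L=0.6, c=0.11):
    """Eqs. (1)-(2) for a single RGC/LGN unit j.

    psi         -- photoreceptor activations in the unit's connection field F_j
    w_aff       -- difference-of-Gaussians afferent weights omega_ij
    a_neighbors -- afferent drives A_k of the lateral pool (Eq. 2 for units k)
    l_lat       -- lateral connection weights l_kj
    """
    A_j = gamma_F * np.dot(psi, w_aff)                      # Eq. (2)
    eta = A_j / (c + gamma_L * np.dot(a_neighbors, l_lat))  # divisive normalization
    return max(0.0, eta)                                    # half-wave rectification f
```

A more strongly driven lateral pool divisively suppresses the unit's output, which is the gain-control role of this normalization.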

Units in the cortical sheets each receive three types of projections represented as matrices of weights: afferent excitatory (*p* = *A*), lateral excitatory (*p* = *E*) and lateral inhibitory (*p* = *I*). The contribution *C*_{jp} to the activation of unit *j* in a cortical sheet from each projection *p* at time *t* is given by:

$${C}_{jp}(t+\delta t)={\displaystyle \sum _{i\in {F}_{jp}}{\Psi}_{i}}(t){\omega}_{pij}$$

(3)

where Ψ_{i}(*t*) is the activation of unit *i* taken from the set of units in the input sheet of projection *p* from which unit *j* receives input (its connection field *F*_{jp}), and ω_{pij} is the connection weight from unit *i* in the input sheet of projection *p* to unit *j* in the output sheet of projection *p*. All connection field weights are initialized with uniform random noise multiplied by a 2D Gaussian profile, cut off at the distance specified below. Contributions from each projection are weighted and summed and passed via a non-linearity *f* to calculate the activation of a cortical neuron i at time *t*:

$${\Psi}_{i}(t)=f({\displaystyle \sum _{p}{\gamma}_{p}}{C}_{ip}(t))$$

(4)

where γ_{p} is a constant determining the sign (negative for inhibitory) and strength of projection *p*. The transfer function *f* is a half-wave rectifying function that ensures positive activation values. It has a variable threshold point (θ) dependent on the average activity of the unit, as described in the next subsection, but in all cases the gain is fixed at unity. The projection strength scaling factor of the afferent projection γ_{A} was set to 1.5 based on Stevens et al. (2013), while the values of the lateral excitatory and inhibitory scaling factors, γ_{E} and γ_{I} respectively, were varied (see Figure 4) to find a balance between excitation and inhibition, and between afferent and lateral influences, that provides robust formation of activity bubbles and thereby facilitates the formation of smooth maps.
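For one cortical unit, Equations (3) and (4) amount to the following sketch; the dictionary layout and the toy values in the usage example are illustrative:

```python
import numpy as np

def cortical_activation(pre, w, gamma, theta=0.0):
    """Eqs. (3)-(4): per-projection contributions C_jp summed with strengths
    gamma_p and passed through a rectifier with adaptive threshold theta.

    pre, w -- dicts keyed by projection ('A', 'E', 'I') holding presynaptic
              activities and the matching connection-field weights
    """
    total = sum(gamma[p] * np.dot(pre[p], w[p]) for p in pre)  # Eq. (3), summed as in Eq. (4)
    return max(0.0, total - theta)  # half-wave rectification, unit gain

# Toy example: one presynaptic unit per projection
pre = {'A': np.array([1.0]), 'E': np.array([1.0]), 'I': np.array([1.0])}
w = {'A': np.array([0.5]), 'E': np.array([0.5]), 'I': np.array([0.5])}
gamma = {'A': 1.5, 'E': 1.0, 'I': -1.0}   # inhibitory projection has negative sign
```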

Once all 16 settling steps are complete, the settled cortical activation pattern is deemed to be the response of cortical sheets to the presented pattern. At this point we use the response of cortical neurons to update their threshold point (θ) (using the adaptation process described below) and to update the afferent weights via Hebbian learning. Cortical activity is then reset to zero, and a new pattern is presented. Note that both adaptation and learning could instead be performed at every settling step, but this would greatly decrease computational efficiency.

The threshold θ of all cortical excitatory units is updated at the end of each settling phase based on the following equations:

$${\theta}_{t\text{\hspace{0.05em}}+\text{\hspace{0.05em}}1}={\theta}_{t}+\xi (\tilde{\Psi}(t)-\mu )$$

(5)

where ξ = 0.01 is the time constant of the threshold adaptation, μ = 0.24 is a constant defining the target average activity, and $\tilde{\Psi}$ is the recent average activity of the unit:

$$\tilde{\Psi}(t)=(1-\chi )\Psi (t)+\chi \tilde{\Psi}(t-1)$$

(6)

where Ψ(*t*) is the output of the unit at time t and χ = 0.991 is a time constant controlling the decay of the influence of the past activities. The effect of this scaling mechanism is to bring the average activity of each cortical unit closer to the specified target. If the activity in a V1 unit moves away from the target during training, the threshold for activation is thus automatically raised or lowered to bring it closer to the target. Note that an alternative rule with only a single smoothing parameter (rather than ξ and χ) could be formulated, but the rule as presented here makes it simple for the modeler to set a desired target activity.
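The adaptation rule of Equations (5) and (6) in code; the constant driving activity of 0.8 is an arbitrary illustration of a unit held above the target rate:

```python
def update_threshold(theta, psi_avg, psi_now, xi=0.01, mu=0.24, chi=0.991):
    """Eqs. (5)-(6): homeostatic threshold adaptation toward target activity mu."""
    psi_avg = (1.0 - chi) * psi_now + chi * psi_avg  # Eq. (6): smoothed recent activity
    theta = theta + xi * (psi_avg - mu)              # Eq. (5): shift the threshold
    return theta, psi_avg

# A unit persistently active at 0.8 (well above mu = 0.24) has its
# activation threshold steadily raised over training iterations:
theta, avg = 0.0, 0.0
for _ in range(1000):
    theta, avg = update_threshold(theta, avg, 0.8)
```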

The initial connection field weights are isotropic 2D Gaussians for the lateral excitatory projection and uniformly random within a Gaussian envelope for afferent and lateral inhibitory projections. Specifically, a neuron located at (i, j) will have the following weights in projection *p*:

$${\omega}_{ijp}=\frac{1}{{Z}_{p}}\,u\,\exp\left(-\frac{{x}^{2}+{y}^{2}}{2{\sigma}_{p}^{2}}\right)$$

(7)

where (x, y) is the sheet-coordinate location of the presynaptic neuron, *u* = 1 for the lateral excitatory projection (*p* = E), and u is a scalar value drawn from a uniform random distribution for the afferent and lateral inhibitory projections (*p* = A, I), σ_{p} determines the width of the Gaussian in sheet coordinates (σ_{A} = 0.27, σ_{E} = 0.035, ${\sigma}_{I}=0.035\sqrt{2}$), and *Z*_{p} is a constant normalizing term that ensures that the total of all weights ω_{ijp} to neuron *j* in projection *p* is 1.0. Weights for each projection are only defined within a specific maximum circular radius *r*_{p} (*r*_{A} = 0.27, *r*_{E} = 0.15, ${r}_{I}=0.15\sqrt{2}$).
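Equation (7) can be sketched as follows; the 25 × 25 sampling grid is an arbitrary illustration, while σ_I and r_I in the example are the lateral-inhibitory values quoted above:

```python
import numpy as np

def init_connection_field(positions, sigma_p, r_p, random_u=True, seed=0):
    """Eq. (7): weights under a Gaussian envelope, cut off at radius r_p and
    normalized to sum to 1 (the 1/Z_p term).

    positions -- (n, 2) presynaptic sheet coordinates relative to the neuron
    random_u  -- True for the afferent/lateral inhibitory projections,
                 False (u = 1) for the lateral excitatory projection
    """
    rng = np.random.default_rng(seed)
    d2 = np.sum(positions**2, axis=1)
    u = rng.uniform(0.0, 1.0, len(positions)) if random_u else 1.0
    w = u * np.exp(-d2 / (2 * sigma_p**2))
    w[d2 > r_p**2] = 0.0        # circular connection-field cutoff at radius r_p
    return w / w.sum()          # Z_p normalization

# Example: a lateral inhibitory connection field on a small grid
g = np.linspace(-0.3, 0.3, 25)
pos = np.stack(np.meshgrid(g, g), axis=-1).reshape(-1, 2)
w_I = init_connection_field(pos, sigma_p=0.035 * np.sqrt(2), r_p=0.15 * np.sqrt(2))
```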

In the model, as images are presented to the photo-receptors, the cortical afferent connection weights ω_{i,j,A} from the ON/OFF sheets are adjusted once per iteration (after cortical settling is completed) using a simple Hebbian learning rule. This rule results in connections that reflect correlations between the presynaptic ON/OFF unit activities and the postsynaptic cortical response. Hebbian connection weight adjustment at each iteration is dependent on the presynaptic activity, the postsynaptic response, and the Hebbian learning rate:

$${\omega}_{ijA}(t)=\frac{{\omega}_{ij}(t-1)+{\beta}_{p}{\Psi}_{j}(t){\Psi}_{i}(t)}{{\displaystyle {\sum}_{p\in \{ON,OFF\}}{\displaystyle {\sum}_{k}\left({\omega}_{kj,p}(t-1)+{\beta}_{p}{\Psi}_{j}(t){\Psi}_{k}(t)\right)}}}$$

(8)

where β_{p} is the Hebbian learning rate for the connection fields in the two afferent projections from RGC/LGN *p* ∈ {ON, OFF}; i.e., the afferent weights from the RGC/LGN are normalized jointly. Learning rate parameters are specified as a fixed value ι_{p} = 0.2 for each projection, and the unit-specific values used in the equation above are then calculated as ${\beta}_{p}=\frac{{\iota}_{p}}{{\upsilon}_{p}}$, where υ_{p} is the number of connections per connection field in projection *p*. The base parameters described here correspond to the first model variant (Figure 4). Any modifications of these base parameters in the other two GCAL model variants 2 and 3 (Figures 5, 6), examined in Sections 3.2 and 3.3, are reported in the respective sections.
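A sketch of Equation (8) with the ON and OFF connection fields held in a dictionary; the four-connection toy fields in the usage example are illustrative:

```python
import numpy as np

def hebbian_update(w, psi_pre, psi_j, iota=0.2):
    """Eq. (8): Hebbian increment followed by joint normalization across the
    ON and OFF afferent connection fields.

    w, psi_pre -- dicts keyed by 'ON'/'OFF': weight vectors and presynaptic
                  RGC/LGN activities; psi_j -- postsynaptic cortical response
    """
    # beta_p = iota_p / upsilon_p, with upsilon_p connections per field
    raw = {p: w[p] + (iota / w[p].size) * psi_j * psi_pre[p] for p in w}
    z = sum(r.sum() for r in raw.values())   # joint ON+OFF normalizer
    return {p: raw[p] / z for p in raw}

# One update: only the first ON input is co-active with the postsynaptic unit
w0 = {'ON': np.full(4, 0.125), 'OFF': np.full(4, 0.125)}
pre = {'ON': np.array([1.0, 0.0, 0.0, 0.0]), 'OFF': np.zeros(4)}
w1 = hebbian_update(w0, pre, psi_j=1.0)
```

Weights grow toward co-active presynaptic inputs while the joint ON+OFF normalization keeps the total weight constant.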

The architecture of this model (model variant 4; Figures 2, 3D) is identical to the GCAL models (model variants 1 through 3; Figures 3A–C) with the exception of Equations (3) and (4), which were replaced by an equation taking into account the membrane time constant:

$${\tau}_{z}\frac{\delta {\Psi}_{i}}{\delta t}=-{\Psi}_{i}+{\displaystyle \sum _{p}{\gamma}_{p}}{\displaystyle \sum _{j\in {F}_{ip}}f}({\Psi}_{j}(t-{\nu}_{p})){\omega}_{pji}$$

(9)

where τ_{z} (*z* ∈ {*E*, *I*}) is the membrane time constant of the excitatory and inhibitory neurons (τ_{E} = 2 ms, τ_{I} = 0.5 ms), and ν_{p} is the transmission delay for projection *p* (ν_{EE} = 1.4 ms, ν_{EI} = 0.5 ms, ν_{IE} was varied, see Section 3.4). As a consequence, the evolution of the network dynamics has to be simulated at a higher temporal resolution. We chose the update step to be 0.1 ms, and we let the activity in the model settle for 150 ms, resulting in 1,500 settling steps as opposed to the 16 of the GCAL model. The sub-cortical parametrization of the model is identical to model variants 1–3 (Figures 3A–C), but because the addition of membrane time constants substantially changes the cortical dynamics, the cortical parameters had to be re-adjusted. Specifically, the strength of the thalamo-cortical connections was set to γ_{A} = 0.5, the strength of the lateral excitatory-to-excitatory connections to γ_{EE} = 3.5, the strength of the lateral excitatory-to-inhibitory connections to γ_{EI} = 1.0, and the strength of the lateral inhibitory-to-excitatory connections to γ_{IE} = 2.94.
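A one-unit caricature of Equation (9), integrated with forward Euler at the 0.1 ms step described above. The gains and the ν_IE value below are illustrative choices selected to give a stable fixed point, not the network parameters quoted in the text:

```python
import numpy as np

def simulate_ei_pair(T=150.0, dt=0.1, tau_e=2.0, tau_i=0.5,
                     nu_ee=1.4, nu_ei=0.5, nu_ie=0.9,
                     g_ee=1.2, g_ei=1.0, g_ie=1.0, i_aff=0.5):
    """Forward-Euler integration of Eq. (9) for a single recurrently coupled
    excitatory-inhibitory pair. Delays are implemented by indexing into the
    stored rate history (zero initial history)."""
    steps = int(T / dt)
    d_ee, d_ei, d_ie = (int(v / dt) for v in (nu_ee, nu_ei, nu_ie))
    E = np.zeros(steps + 1)
    I = np.zeros(steps + 1)
    f = lambda x: max(0.0, x)   # half-wave rectifying transfer function
    for t in range(steps):
        dE = (-E[t] + g_ee * f(E[max(0, t - d_ee)])
                    - g_ie * f(I[max(0, t - d_ie)]) + i_aff) / tau_e
        dI = (-I[t] + g_ei * f(E[max(0, t - d_ei)])) / tau_i
        E[t + 1] = E[t] + dt * dE
        I[t + 1] = I[t] + dt * dI
    return E, I

E, I = simulate_ei_pair()   # settles to E = I = i_aff / (1 - g_ee + g_ie) = 0.625
```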

Model orientation maps are calculated based on the vector average method (Miikkulainen et al., 2005). We first determine the preferred frequency of neurons across the map. Due to the simplified stereotypical stimulus used in this study (elongated Gaussian inputs) the spatial frequency preference of all neurons lies in a very narrow band, and we thus use the mean preferred spatial frequency across all cortical neurons as the value for the spatial frequency parameter across all subsequent analysis. Next, sinusoidal grating inputs that cover the full range of remaining parameter values (combinations of all orientations and phases) are presented, and for each orientation, the peak response of the neuron is recorded. The orientation preference is calculated by constructing a vector for each orientation θ (between 0 and 180°), with the peak response as the length and θ as its orientation. These vectors are summed and the preferred orientation is calculated as half of the orientation of the summed vector. The selectivity is given by the magnitude of the summed vector.
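The vector-average computation can be sketched as follows; the cosine-tuned toy responses are illustrative:

```python
import numpy as np

def vector_average(orientations, responses):
    """Vector-average orientation preference and selectivity.

    Orientation is periodic over 180 degrees, so each angle is doubled before
    the vector sum and the resulting angle is halved afterwards.
    """
    z = np.sum(np.asarray(responses) * np.exp(2j * np.asarray(orientations)))
    preference = (np.angle(z) / 2.0) % np.pi   # preferred orientation in [0, pi)
    selectivity = np.abs(z)                    # magnitude of the summed vector
    return preference, selectivity

# A unit tuned to 60 degrees: cosine tuning curve peaking at pi/3 (toy data)
thetas = np.linspace(0.0, np.pi, 8, endpoint=False)
resp = 1.0 + np.cos(2.0 * (thetas - np.pi / 3.0))
pref, sel = vector_average(thetas, resp)
```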

In order to assess whether the proposed model develops orientation maps that match the structure of those found in real animals, we need an automatic metric that quantifies how close the maps are to animal data. To this end we utilize a map-quality metric that we have recently developed (Stevens et al., 2013), based on the empirical observation that the pinwheel count in biological orientation maps scales linearly with hypercolumn size across many different species (Kaschube et al., 2010). Specifically, the pinwheel density per hypercolumn area (Λ^{2}) converges to π when averaged across a sufficiently large cortical surface. For a detailed description of the procedure for calculating this metric we refer the reader to our previous work (Stevens et al., 2013), but briefly, its calculation involves three steps. First, the locations of the pinwheels in the orientation map are determined as the intersections of the zero contours of the real and imaginary components in the polar representation of the map, yielding the total pinwheel count in the map. Second, the hypercolumn size is determined as the peak in the isotropic ring-like Fourier transform of the orientation map. Third, from these two numbers we derive the pinwheel density; to transform it into a useful metric between unity (high-quality map) and zero (low-quality map), we pass it through a normalized Gamma distribution. We have shown that this metric reliably distinguishes low- and high-quality maps (Stevens et al., 2013) and is a valid measure for assessing how well model orientation maps match animal data.

In this article we proceed through multiple progressively more complex models of orientation development, addressing several of the major issues with modeling this phenomenon, and eventually demonstrating that the experimentally identified fast inhibitory loop is a satisfactory explanation of how short-range inhibition can support the development of cortical functional organization. We use two different computational abstractions to explore the questions at hand. In the first part of the study we use a computational model derived from the LISSOM family of models (Miikkulainen et al., 2005). This choice has three advantages. It allows for a very straightforward explanation of why a fast inhibitory loop enables short-range inhibition to induce competition. It makes our explanation directly comparable to the extensive set of published LISSOM-family models, and thus demonstrates that the solution proposed here generalizes to a range of other functional properties. And finally, the LISSOM abstraction enables very fast simulations, allowing us to perform a parameter-search analysis that would otherwise be computationally prohibitive. However, as we explain further in Section 3.4, some simplifications made by the LISSOM abstraction, specifically the instantaneous translation of a neuron's input into its activity, leave certain questions unanswered. These are addressed in Section 3.4 using a more detailed rate-model framework.

Ohana et al. (2012) have shown that transmission delays between different types of cortical neurons are not uniform; specifically, they found that on average the transmission delays between excitatory neurons are ~1.4 ms, from excitatory to inhibitory neurons ~0.5 ms, and from inhibitory to excitatory neurons ~0.98 ms. The sample size for connections between inhibitory cells in that study was not large enough to be quantitatively reliable. The key observation here is that the combined disynaptic delay from excitatory to inhibitory and inhibitory to excitatory cells is approximately as long as the mono-synaptic delay of the excitatory-to-excitatory connections. Drawing from this, the core insight of this study is that under such a pattern of delays the effective inhibitory interactions are well approximated by the convolution of the excitatory and inhibitory connection kernels, as we will show below. This gives the effective inhibition a longer range, consequently fulfilling the essential requirement for cortical competition to occur. For didactic purposes, and to simplify the analytical treatment, let us first explore one specific set of conditions under which the above statement holds exactly; we will explore the situation when these conditions are relaxed in subsequent sections:

- The sum of the excitatory to inhibitory and inhibitory to excitatory delays is exactly the same as the excitatory to excitatory delay.
- There is no inhibitory to inhibitory interaction.
- The synaptic inputs into the excitatory and inhibitory neurons are instantaneously translated into their rate response via a positive rectified transfer function.
- The connection kernels of all neurons are Gaussian kernels with spatial constant σ_{e} for excitatory neurons and σ_{i} for inhibitory neurons.
- We will assume only local connectivity (disregarding long-range excitatory connections), and assume that both excitatory and inhibitory neurons have the same extent, thus σ_{e} = σ_{i}.

These conditions lead to the following set of equations governing the cortico-cortical interaction.

$$\begin{array}{l}{R}_{e}(x,t)={\left[\sum _{y}\left({N}_{{\sigma}_{e}}(\Vert y-x\Vert ){R}_{e}(y,t-{\theta}_{EE})-{N}_{{\sigma}_{i}}(\Vert y-x\Vert ){R}_{i}(y,t-{\theta}_{IE})\right)+{I}_{aff}(x,t)\right]}^{+}\\ {R}_{i}(x,t)={\left[\sum _{y}{N}_{{\sigma}_{e}}(\Vert y-x\Vert ){R}_{e}(y,t-{\theta}_{EI})\right]}^{+}\end{array}$$

(10)

where *R*_{p}(*x, t*) is the response of a neuron of type *p* located at position *x* at time *t*, *I*_{aff}(*x, t*) is the afferent input to the neuron at position *x* at time *t*, *N*_{σ} is a normal distribution with standard deviation σ, corresponding to the lateral connection kernel, and θ_{ab} is the delay on connections from neural type *a* to neural type *b*. When we expand for *R*_{i} we obtain:

$$R_e(x,t) = \Big[\sum_y N_{\sigma_e}(\|y-x\|)\,R_e(y,\,t-\theta_{EE}) - \sum_y N_{\sigma_i}(\|y-x\|)\Big[\sum_z N_{\sigma_e}(\|z-y\|)\,R_e(z,\,t-\theta_{EI}-\theta_{IE})\Big]^{+} + I_{aff}(x,t)\Big]^{+}$$

(11)

Because θ_{EI} + θ_{IE} = θ_{EE} (assumption 1), and because the inhibitory neurons receive only excitatory connections (assumption 2) so that their input drive is non-negative and the inner rectification can be dropped, we can further simplify the above to:

$$R_e(x,t) = \Big[\sum_y N_{\sigma_e}(\|y-x\|)\,R_e(y,\,t-\theta_{EE}) - \sum_z\sum_y N_{\sigma_i}(\|y-x\|)\,N_{\sigma_e}(\|z-y\|)\,R_e(z,\,t-\theta_{EE}) + I_{aff}(x,t)\Big]^{+}$$

(12)

and thus due to the symmetry of the normal distribution:

$$R_e(x,t) = \Big[\sum_y N_{\sigma_e}(\|y-x\|)\,R_e(y,\,t-\theta_{EE}) - \sum_z (N_{\sigma_i} \ast N_{\sigma_e})(\|z-x\|)\,R_e(z,\,t-\theta_{EE}) + I_{aff}(x,t)\Big]^{+}$$

(13)

Because the convolution of two normal distributions with standard deviations σ_{i} and σ_{e} is a normal distribution with standard deviation $\sqrt{{\sigma}_{i}^{2}+{\sigma}_{e}^{2}}$, and because σ_{i} = σ_{e}, we can simplify to:

$$R_e(x,t) = \Big[\sum_y N_{\sigma_e}(\|y-x\|)\,R_e(y,\,t-\theta_{EE}) - \sum_z N_{\sqrt{2}\sigma_e}(\|z-x\|)\,R_e(z,\,t-\theta_{EE}) + I_{aff}(x,t)\Big]^{+}$$

(14)
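Before moving on, the composition rule used in the last step (standard deviations combine in quadrature under convolution) is easy to verify numerically; the sampling grid and kernel widths below are illustrative choices, not parameters from the model:

```python
import numpy as np

sigma_e = sigma_i = 3.0                      # illustrative kernel widths
x = np.arange(-40, 41, dtype=float)          # sampling grid, ~13 sigma wide

def gaussian(x, s):
    """Discretized, normalized Gaussian kernel."""
    g = np.exp(-x ** 2 / (2 * s ** 2))
    return g / g.sum()

# Convolve the inhibitory and excitatory connection kernels: N_sigma_i * N_sigma_e
conv = np.convolve(gaussian(x, sigma_i), gaussian(x, sigma_e), mode="full")
support = np.arange(len(conv), dtype=float) - (len(conv) - 1) / 2

# The empirical standard deviation of the convolved kernel matches sqrt(si^2 + se^2)
std = np.sqrt((conv * support ** 2).sum() / conv.sum())
assert abs(std - np.sqrt(sigma_i ** 2 + sigma_e ** 2)) < 1e-3
```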

This shows that under these specific assumptions the effective inhibitory interactions between excitatory neurons reach $\sqrt{2}$ times further than the excitatory ones and thus follow the Mexican-hat profile required for the lateral cortical competition underlying functional map development. Even though this essential condition for cortical competition is fulfilled in this model configuration, important constraints on the extent of the effective lateral inhibition (relative to lateral excitation) remain. It is thus still unclear whether development of high-quality orientation maps, as observed experimentally, is supported under these conditions. Conveniently, the lateral interaction in the LISSOM family of models is governed by the same equations as Equation (10). In the following we will use the GCAL model (Stevens et al., 2013), the latest and most robust member of the LISSOM family, to demonstrate that the above configuration of effective lateral excitation and inhibition permits the development of high-quality orientation maps. GCAL is the only model of stimulus-dependent functional development that achieves biologically realistic orientation maps in terms of pinwheel density, a signature that serves as a useful objective measure of orientation map quality. We will use this map-quality measure to reliably detect model configurations that permit the successful emergence of orientation maps. To facilitate comparison we will use exactly the same model configuration as Stevens et al. (2013) (see Section 2), except for three modifications necessitated by the analysis above (see Figure 3A):

- We change the spread of lateral inhibition to be $\sqrt{2}$ times longer than the lateral excitatory spread, so that the resulting lateral interactions conform with Equation (14).
- The change in point 1 alters the balance of overall excitation and inhibition in the model, which is critical for successful functional development. We therefore modify the strength of the lateral inhibition to compensate for changes due to modification 1.
- Analogously to point 2, the change in point 1 also alters the overall balance between feed-forward and lateral contributions to a model cortical neuron's activity, which we compensate by changing the overall strength of the lateral interactions.
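As a sanity check on the interaction profile these modifications implement, the effective kernel of Equation (14), a Gaussian of width σ_e minus one of width √2 σ_e, can be verified numerically to have the Mexican-hat shape; the width value below is illustrative:

```python
import numpy as np

sigma_e = 3.0                                # illustrative excitatory width
x = np.arange(-30, 31, dtype=float)

def gaussian(x, s):
    """Discretized, normalized Gaussian kernel."""
    g = np.exp(-x ** 2 / (2 * s ** 2))
    return g / g.sum()

# Effective lateral interaction of Equation (14): short-range excitation
# minus the sqrt(2)-broader effective inhibition
dog = gaussian(x, sigma_e) - gaussian(x, np.sqrt(2) * sigma_e)

center = len(x) // 2
assert dog[center] > 0                       # net excitation at short range
assert dog[0] < 0 and dog[-1] < 0            # net inhibition in the surround
assert abs(dog.sum()) < 1e-12                # balanced: both kernels normalized
```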

In order to find a working combination of the parameters in points 2 and 3, and also to show that the model is robust to a certain level of change in these two parameters, we performed a search across both parameters and evaluated the quality of the resulting orientation map (see Section 2) for each parameter combination. As Figure 4 shows, under a range of values of both parameters the model develops high-quality orientation maps indistinguishable from their experimental counterparts. This concludes our first step toward showing that a fast excitatory-to-inhibitory-to-excitatory loop can explain how short-range inhibition induces cortical competition and, consequently, the development of topological organization of functional properties.
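The two-parameter search itself can be organized as a plain grid scan. In the sketch below, `develop_map` and `map_quality` are hypothetical placeholders standing in for a full GCAL development run and for the pinwheel-density map-quality metric of Stevens et al. (2013), both of which are far too heavy to inline here:

```python
import itertools
import numpy as np

def develop_map(inh_strength, lat_strength, seed=0):
    """Placeholder: in the actual study this would be a full GCAL
    development simulation; here it returns a random map in [0, pi)."""
    rng = np.random.default_rng(seed)
    return rng.uniform(0.0, np.pi, size=(24, 24))

def map_quality(om):
    """Placeholder for the pinwheel-density quality measure;
    returns a score in [0, 1]."""
    return 1.0 - float(np.abs(np.exp(2j * om).mean()))

inh_grid = np.linspace(0.5, 2.0, 4)          # lateral inhibition strength
lat_grid = np.linspace(0.5, 2.0, 4)          # overall lateral strength

scores = {}
for i, (gi, gl) in enumerate(itertools.product(inh_grid, lat_grid)):
    scores[(gi, gl)] = map_quality(develop_map(gi, gl, seed=i))

best_params = max(scores, key=scores.get)    # best-scoring combination
```

With the real simulator and metric substituted in, the same loop reproduces the scan summarized in Figure 4.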

Furthermore, note that the GCAL model used in Figure 4 only explicitly models excitatory neurons and assumes both direct excitatory and inhibitory interactions between them, thus corresponding to Equation (14). Even though we have shown above that Equation (14) is equivalent to Equation (10), to verify the correctness of our analysis we ran a single simulation of the GCAL model corresponding to the parameter combination with the highest map quality found in Figure 4, but with an explicitly simulated inhibitory population (Figure 4E). In this model we thus do not model direct inhibitory interactions between excitatory neurons, but instead add excitatory to inhibitory and inhibitory to excitatory connections of the same extent as those of the excitatory to excitatory pathway (see assumption 5). As shown above, this model should be mathematically equivalent to the simulation shown in Figure 4D. Note, however, that the GCAL simulations represent a discrete approximation in both time and space of Equations (10, 14), and we thus expect small numerical discrepancies. Indeed, the orientation maps shown in Figure 4E are nearly identical to those in Figure 4D, with only barely perceptible numerical differences, confirming the validity of our approach.
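For readers who want to experiment, the discrete dynamics of Equation (10) can be reproduced in a few lines on a one-dimensional ring of rate units. All sizes, kernel widths, and integer-step delays below are illustrative assumptions, with the delays chosen so that θ_EI + θ_IE = θ_EE (assumption 1) and σ_e = σ_i (assumption 5):

```python
import numpy as np

def ring_weights(n, sigma):
    """Gaussian connection kernel N_sigma on a ring of n positions,
    normalized so each row sums to 1."""
    idx = np.arange(n)
    d = np.abs(idx[:, None] - idx[None, :])
    d = np.minimum(d, n - d).astype(float)   # circular distance
    W = np.exp(-d ** 2 / (2 * sigma ** 2))
    return W / W.sum(axis=1, keepdims=True)

n, sigma = 64, 3.0                           # illustrative network size and width
th_ee, th_ei, th_ie = 4, 2, 2                # delays in steps: th_ei + th_ie == th_ee
We = ring_weights(n, sigma)                  # N_sigma_e
Wi = ring_weights(n, sigma)                  # N_sigma_i

T = 80
Re = np.zeros((T, n))
Ri = np.zeros((T, n))
x = np.arange(n)
I_aff = np.exp(-(x - n // 2) ** 2 / (2 * 4.0 ** 2))   # localized afferent drive

for t in range(max(th_ee, th_ei, th_ie), T):
    # R_e(x,t) = [sum_y N_e R_e(y,t-th_EE) - sum_y N_i R_i(y,t-th_IE) + I_aff]^+
    Re[t] = np.maximum(0.0, We @ Re[t - th_ee] - Wi @ Ri[t - th_ie] + I_aff)
    # R_i(x,t) = [sum_y N_e R_e(y,t-th_EI)]^+
    Ri[t] = np.maximum(0.0, We @ Re[t - th_ei])
```

Because each inhibitory unit relays a delayed, smoothed copy of the excitatory activity, the inhibition arriving back at the excitatory population has effectively passed through both kernels, which is precisely the convolution exploited in Equations (11-14).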

In the previous section we have shown that under a set of specific assumptions that facilitate analytical treatment, short-range inhibition can induce effective Mexican-hat like interaction and thus support the development of orientation maps. However, not all assumptions we made were in line with experimental evidence. In this section we will show, now only numerically, that one of these assumptions is not necessary, specifically that the addition of inhibitory to inhibitory connections does not prevent emergence of orientation maps.

To this end, we use the exact GCAL model configuration that we found in the previous section to possess the highest-quality orientation map (see Figure 4). We use the GCAL configuration in which the inhibitory neurons are modeled explicitly (see Figure 3B and Equation 10), and consequently the excitatory to inhibitory and inhibitory to excitatory connections are explicit as well. Furthermore, we add inhibitory to inhibitory connections with the same extent as that of the inhibitory to excitatory connections (σ_{i} = σ_{e} in Equation 10).

By explicitly modeling the inhibitory neurons we have replaced the single parameter governing the strength of inhibitory lateral interactions in the previous section's model with three new parameters that set the strength of (1) excitatory to inhibitory, (2) inhibitory to excitatory, and (3) inhibitory to inhibitory connections (all other parameters remained the same as in the best parameterization found in the previous section). Note that in principle there is redundancy among these parameters, as the overall strength of the projections from inhibitory neurons onto both the excitatory and inhibitory populations is scaled by the excitatory to inhibitory projection strength. Therefore, in Figure 5 we systematically varied the strength of the added inhibitory to inhibitory projection, expressed relative to the strength of the inhibitory to excitatory connections (which was set to 1), while also varying the strength of the excitatory to inhibitory projection, and investigated the quality of the orientation maps that developed under these different levels of inhibitory to inhibitory interaction. As can be seen, high-quality maps develop under the full range of inhibitory to inhibitory interaction strengths, depending on the overall excitatory to inhibitory drive. This shows that inclusion of direct inhibitory to inhibitory interactions does not invalidate the results of Section 3.1 (Figure 4).

In model variants 1 and 2 (Figures 4, 5) we assumed only local connectivity by setting both excitatory and inhibitory interactions to have the same spatial extent. Experimental evidence, however, shows that excitatory cells send longer-range connections than inhibitory cells (Budd and Kisvárday, 2001; Buzás et al., 2006). In this section we explore what happens if we add long-range excitatory connectivity to model variant 1. Buzás et al. (2006) have shown that the lateral connectivity in layer 2/3 is best described as a superposition of two Gaussian connectivity likelihoods, one short-range and not orientation specific, the other long-range and orientation specific. Here we assume such a dual structure, leading us to add a second excitatory to excitatory and excitatory to inhibitory projection to model variant 1, but with a space constant 3 times larger (see Figure 3C), in line with the quantitative findings of Buzás et al. (2006).
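The dual connectivity profile used in this model variant can be sketched as a weighted sum of a short-range kernel and a component with a 3× larger space constant; `w_long` plays the role of the relative long-range strength, and the concrete numbers below are illustrative:

```python
import numpy as np

def dual_kernel(x, sigma_short, w_long):
    """Short-range Gaussian kernel plus a long-range component with a
    3x larger space constant, mixed with relative strength w_long."""
    short = np.exp(-x ** 2 / (2 * sigma_short ** 2))
    long_ = np.exp(-x ** 2 / (2 * (3.0 * sigma_short) ** 2))
    k = short / short.sum() + w_long * long_ / long_.sum()
    return k / k.sum()                       # renormalize the mixture

x = np.arange(-30, 31, dtype=float)
k_local = dual_kernel(x, 3.0, 0.0)           # purely local (variants 1-2)
k_dual = dual_kernel(x, 3.0, 0.5)            # with long-range excitation added

# The long-range component moves connection mass into the distal tails
assert k_dual[0] > k_local[0]
assert abs(k_dual.sum() - 1.0) < 1e-12
```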

In Figure 6 we examine what strength of the long-range excitatory connections, relative to the short-range ones (y axis in Figure 6A), leads to the development of a high-quality orientation map. Adding long-range excitation changes the spatial configurations over which excitatory interactions win over inhibitory ones, as well as the overall magnitude of the resulting net local excitation. Consequently, the ratio between the magnitude of the net local excitation due to lateral interactions and the excitation due to afferent inputs, a crucial parameter for map development, is changed. To compensate for these changes, we also systematically explore the ratio of the overall magnitudes of the lateral and afferent interactions (x axis of Figure 6). To make the parameter search computationally feasible, we restrict it to the region of parameter space that allows for sufficiently strong long-range excitatory projections. As can be seen in Figures 6D,E, under appropriately strong lateral interactions, substantial long-range excitatory connections still permit the development of high-quality orientation maps, demonstrating that the proposed model is consistent with the experimentally identified long-range excitatory connectivity.

In all model variants examined so far we have made the key assumption that the delay on the excitatory to excitatory connections is exactly equal to the sum of the excitatory to inhibitory and inhibitory to excitatory delays. This assumption is approximately supported by the experimental evidence (Ohana et al., 2012), but we cannot assume it holds exactly in a real biological substrate. We hypothesize, however, that small discrepancies between the delay of the mono-synaptic excitatory connections and the cumulative delay of the bi-synaptic inhibitory interactions can be absorbed by the membrane time-constant of the neurons. In this section we verify this hypothesis by extending the modeling framework used thus far with a finite membrane time-constant (see Section 2), and proceed to determine the magnitude of the discrepancy between the excitatory and inhibitory delays that the model can tolerate without impairing the resulting orientation map quality.

We use a model parameterization similar to the one determined for model variant 1 (Section 3.1; Figure 3A). For simplicity and computational efficiency we omit the inhibitory to inhibitory and long-range excitatory connections that have already been investigated in model variants 2 and 3. The membrane time-constant of excitatory neurons was set to 2 ms and that of inhibitory neurons to 0.5 ms. These faster inhibitory dynamics are necessary to prevent oscillations in the system (Kang et al., 2003). We set the excitatory to excitatory delay to 1.4 ms and the excitatory to inhibitory delay to 0.5 ms, based on Ohana et al. (2012). In order to understand how closely the cumulative bi-synaptic inhibitory delay has to match the direct excitatory to excitatory delay, we vary the delay on the inhibitory to excitatory projection (we could equivalently have varied the excitatory to inhibitory delay; the choice was arbitrary).
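A minimal single-pair sketch of these dynamics (one excitatory and one inhibitory rate unit, not the full two-dimensional model) integrates leaky membranes with connection-specific delays. The time constants and the E-to-E and E-to-I delays follow the values quoted above; the coupling weights, drive, and the I-to-E delay are illustrative assumptions:

```python
import numpy as np

dt = 0.1                                     # integration step, ms
tau_e, tau_i = 2.0, 0.5                      # membrane time constants, ms
# Delays in steps of dt: theta_EE = 1.4 ms, theta_EI = 0.5 ms (Ohana et al., 2012);
# theta_IE = 0.9 ms is illustrative, chosen here so that EI + IE == EE.
th_ee, th_ei, th_ie = 14, 5, 9
w_ee, w_ei, w_ie = 0.5, 1.0, 0.5             # illustrative coupling strengths
I_aff = 1.0                                  # constant afferent drive

T = 1000
re = np.zeros(T)
ri = np.zeros(T)
for t in range(max(th_ee, th_ei, th_ie), T):
    # Rectified delayed drives, as in Equation (10)
    drive_e = max(0.0, w_ee * re[t - th_ee] - w_ie * ri[t - th_ie] + I_aff)
    drive_i = max(0.0, w_ei * re[t - th_ei])
    # Leaky integration toward the drive, with the faster inhibitory membrane
    re[t] = re[t - 1] + dt / tau_e * (drive_e - re[t - 1])
    ri[t] = ri[t - 1] + dt / tau_i * (drive_i - ri[t - 1])
```

Varying `th_ie` away from the matched value reproduces, in miniature, the delay-mismatch manipulation explored in Figure 7.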

Figure 7 shows the resulting orientation maps and associated map-quality measures for a range of differences between the delays of the mono-synaptic excitatory and bi-synaptic inhibitory interactions, expressed in the figure as the sum of the excitatory to inhibitory and inhibitory to excitatory delays minus the excitatory to excitatory delay (i.e., how much slower the di-synaptic inhibitory interactions were compared with the mono-synaptic excitatory ones). As can be seen, when the difference between the excitatory delay (1.4 ms) and the cumulative inhibitory delay is small (<0.8 ms), high-quality orientation maps develop in the model, confirming that a moderate discrepancy between the direct excitatory and bi-synaptic inhibitory delays can be accommodated. However, as expected, as the difference between the delays increases, the ability of the model to learn a topologically organized representation of orientation preference diminishes. Crucially, if the delays across all projections were equal (Figure 7), as is typically assumed, the model fails to develop orientation maps, in line with the analysis of Muir and Cook (2014), thus confirming that the specific delay pattern between neural types identified by Ohana et al. (2012) is key to achieving competitive dynamics in topologically organized neural models.

In this study we have shown how recent findings on the dependence of neural transmission delays on the types of pre- and post-synaptic neurons (Ohana et al., 2012) can resolve a long-standing question: how short-range inhibition can support cortical competition and, consequently, the development of functional cortical topological organization. Under simplifying assumptions, we analytically showed how di-synaptic inhibition that is as fast as mono-synaptic excitation can extend the effective range of inhibitory interactions, in contrast to recent analytical results showing that with equal synaptic delays on all connections di-synaptic inhibition has negligible effect (Muir and Cook, 2014). We also showed that these findings are applicable to the problem of functional development in primary visual cortex. We then showed, using computational methods, that the proposed models are robust to the addition of other well-established features of cortical anatomy commonly ignored by similar studies, including long-range excitatory connections and mutual inhibition among inhibitory neurons. Finally, we showed that the proposed mechanisms are robust to variations in the exact delay ratio between the mono-synaptic excitation and di-synaptic inhibition. Overall, this study represents an important advance in our understanding of how orientation map development can be supported by the cortical neural substrate.

Using reconstructions of the recorded neurons, Ohana et al. (2012) identified the placement of the different pre-post synaptic combinations on the dendrites of the target cell as the likely origin of this transmission-delay heterogeneity: excitatory-to-inhibitory synapses were closer to the soma of the post-synaptic neuron than excitatory-to-excitatory and inhibitory-to-excitatory synapses. An axonal origin for the delay heterogeneity is unlikely, as Ohana et al. (2012) found that all pre-post combinations were, on average, equally distant from the pre-synaptic cell body. They further supported the dendritic origin of the observed delay inhomogeneities by showing in a computational model that the observed delay magnitudes can be explained by these anatomical findings. Overall, this indicates a mechanism for generating the differences in inter-neuron transmission delays that is likely to generalize beyond the primary visual cortex. We thus suggest that the results presented in this study generalize to the development of other functional features, and to other cortical-competition-based mechanisms whose origin lies outside of primary visual cortex.

In this study we have focused on the development of the global organization of the thalamo-cortical and cortico-cortical connectivity and, as a consequence, the global organization of V1's most salient functional property: orientation tuning. How, however, do the individual model cortical units correspond to individual biological V1 neurons; for example, do they have matching tuning properties? A detailed systematic investigation of this question is outside the scope of this study; however, to offer at least a basic view of how the explored models behave at the single-cell level, Figure 8 shows representative orientation tuning curves for all model parametrizations whose properties we examined in detail (i.e., for which we presented their orientation map). Figure 8 shows that the orientation tuning of individual neurons varies between models and parametrizations, but all models with good-quality orientation maps also have reasonably realistic orientation tuning of single units. Several of these parametrizations achieve sharp, realistic contrast invariance of the orientation tuning width. At the same time, Figure 8 shows that the relationship between the orientation maps, the afferent connectivity patterns, and the orientation tuning is not straightforward. For example, some parametrizations with inferior maps achieve sharper tuning (e.g., Figure 8, A2) than parametrizations exhibiting high-quality orientation maps (e.g., Figure 8, A3). Ultimately, further and more rigorous quantification and systematic investigation of single-cell tuning properties and their dependence on model parametrization will be necessary to fully understand these relationships.

The most closely related past explanation of how cortical competition can arise under short-range inhibition is that of Kang et al. (2003), who showed that under the assumption of a faster inhibitory (as opposed to excitatory) time constant, the effective excitatory and inhibitory interactions follow the Mexican-hat profile and thus support competition along the cortical surface. Indeed, in our final model variant 4, which explicitly considers the membrane time-constant (Section 3.4), we assume that inhibitory neurons have faster membrane time constants than excitatory ones, as otherwise we observe oscillatory behavior, in line with the analytical findings of Kang et al. (2003). Crucially, however, Kang et al. (2003) assumed instantaneous neural transmission, and when this biologically implausible assumption is rectified by adding transmission delays that are uniform across connections between the different pre- and post-synaptic neural types, we find that the competitive dynamics in the neural model break down (Figure 7F), in line with the analytical and computational results of Muir and Cook (2014). When we instead use the neural-type-specific pattern of delays uncovered by Ohana et al. (2012), the competitive dynamics in the model are rescued and we observe the development of high-quality orientation maps (Section 3.4), in line with the analytical results under simplified conditions (Section 3.1).

The analytical results in this study were obtained only under simplifying assumptions: instantaneous translation of inputs into membrane potential, equal extent of excitatory and inhibitory connections, and a lack of inhibitory-to-inhibitory interactions. Even though we have shown computationally that these assumptions are not necessary for achieving the cortical competition and consequent orientation map development sought in this study, further analytical work that can circumvent these simplifications would undoubtedly provide a deeper understanding of the dynamics of the studied neural system and its dependence on the various parameters. This sentiment underlies the parameter explorations presented here, which show that even though the model is robust to changes in the considered parameters, the presence or absence of dynamics supporting the development of orientation maps can form a complex pattern within the explored parameter spaces. This is particularly the case for the model variant with inhibitory-to-inhibitory interactions, in which (unlike in the other variants) the inhibitory population gains its own dynamics. Furthermore, the relative computational complexity of the studied models and the extensive set of parameters involved preclude a systematic search across the full parameter space, and we have only explored the parameters that we empirically found to have the biggest impact. Finally, one simplifying assumption that we have not treated in this study is the lack of direct thalamic input onto inhibitory cells. Since inhibitory cells in cortical layer 4 do receive thalamic input (Binzegger et al., 2004), the inclusion of external input to the inhibitory population needs to be considered in the future.

In this paper we have chosen to investigate cortical competitive mechanisms through the prism of orientation map development. The advantage of this approach is that it allows us to show not only that some form of competition is possible, but also that it is of a form that actually supports the implementation of specific cortical computations. Given that our model implements effective Mexican-hat lateral interactions (Section 3.1), and that such interactions have previously been shown to be sufficient to explain the cortical organization of other functional features (e.g., retinotopy, ocular dominance, spatial frequency, and color), it is very likely that our results will generalize to these other dimensions of sensory input as well. Cortical competition of other forms has been proposed to underlie a broad variety of other cortical operations, including associative memory, noise suppression, decision making, saliency detection, and other forms of attentional computation. Even though additional work will be required to determine whether the mechanisms proposed here generalize to these other neural computations, this study offers a promising framework for anatomically plausible mechanistic explanations of these important aspects of brain function.

JA performed the calculations and simulations and wrote the manuscript.

The research leading to these results has received funding from the European Union H2020 Programme (H2020-Adhoc-2014-20) under grant agreement no. 720270 (The Human Brain Project SGA1).

The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

I would like to thank James A. Bednar and Andrew Davison for reviewing the original draft and providing valuable comments on the presented research.

- Amit D. J., Brunel N. (1995). Learning internal representations in an attractor neural network with analogue neurons. Network 6, 359–388. doi: 10.1088/0954-898X_6_3_004
- Antolík J., Bednar J. A. (2011). Development of maps of simple and complex cells in the primary visual cortex. Front. Comput. Neurosci. 5:17. doi: 10.3389/fncom.2011.00017
- Ben-Yishai R., Bar-Or R. L., Sompolinsky H. (1995). Theory of orientation tuning in visual cortex. Proc. Natl. Acad. Sci. U.S.A. 92, 3844–3848. doi: 10.1073/pnas.92.9.3844
- Binzegger T., Douglas R. J., Martin K. A. C. (2004). A quantitative map of the circuit of cat primary visual cortex. J. Neurosci. 24, 8441–8453. doi: 10.1523/JNEUROSCI.1400-04.2004
- Budd J. M., Kisvárday Z. F. (2001). Local lateral connectivity of inhibitory clutch cells in layer 4 of cat visual cortex (area 17). Exp. Brain Res. 140, 245–250. doi: 10.1007/s002210100817
- Buzás P., Kovács K., Ferecskó A. S., Budd J. M. L., Eysel U. T., Kisvárday Z. F. (2006). Model-based analysis of excitatory lateral connections in the visual cortex. J. Comp. Neurol. 499, 861–881. doi: 10.1002/cne.21134
- Durstewitz D., Seamans J. K., Sejnowski T. J. (2000). Neurocomputational models of working memory. Nat. Neurosci. 3, 1184–1191. doi: 10.1038/81460
- Goodhill G. J. (2007). Contributions of theoretical modeling to the understanding of neural map development. Neuron 56, 301–311. doi: 10.1016/j.neuron.2007.09.027
- Grabska-Barwinska A., von der Malsburg C. (2008). Establishment of a scaffold for orientation maps in primary visual cortex of higher mammals. J. Neurosci. 28, 249–257. doi: 10.1523/JNEUROSCI.5514-06.2008
- Huth A. G., de Heer W. A., Griffiths T. L., Theunissen F. E., Gallant J. L. (2016). Natural speech reveals the semantic maps that tile human cerebral cortex. Nature 532, 453–458. doi: 10.1038/nature17637
- Kang K., Shelley M., Sompolinsky H. (2003). Mexican hats and pinwheels in visual cortex. Proc. Natl. Acad. Sci. U.S.A. 100, 2848–2853. doi: 10.1073/pnas.0138051100
- Kaschube M., Schnabel M., Löwel S., Coppola D. M., White L. E., Wolf F. (2010). Universality in the evolution of orientation columns in the visual cortex. Science 330, 1113–1116. doi: 10.1126/science.1194869
- Kohonen T. (1982). Self-organized formation of topologically correct feature maps. Biol. Cybern. 43, 59–69. doi: 10.1007/BF00337288
- Law J. S. (2009). Modeling the Development of Organization for Orientation Preference in Primary Visual Cortex. Ph.D. thesis, School of Informatics, The University of Edinburgh, Edinburgh.
- Levy R. B., Reyes A. D. (2011). Coexistence of lateral and co-tuned inhibitory configurations in cortical networks. PLoS Comput. Biol. 7:e1002161. doi: 10.1371/journal.pcbi.1002161
- Miikkulainen R., Bednar J. A., Choe Y., Sirosh J. (2005). Computational Maps in the Visual Cortex. Berlin: Springer.
- Muir D. R., Cook M. (2014). Anatomical constraints on lateral competition in columnar cortical architectures. Neural Comput. 26, 1840–1872. doi: 10.1162/neco_a_00613
- Ohana O., Portner H., Martin K. A. C. (2012). Fast recruitment of recurrent inhibition in the cat visual cortex. PLoS ONE 7:e40601. doi: 10.1371/journal.pone.0040601
- Rutishauser U., Slotine J.-J., Douglas R. J. (2012). Competition through selective inhibitory synchrony. Neural Comput. 24, 2033–2052. doi: 10.1162/NECO_a_00304
- Somers D. C., Nelson S. B., Sur M. (1995). An emergent model of orientation selectivity in cat visual cortical simple cells. J. Neurosci. 15, 5448–5465.
- Stevens J.-L. R., Law J. S., Antolík J., Bednar J. A. (2013). Mechanisms for stable, robust, and adaptive development of orientation maps in the primary visual cortex. J. Neurosci. 33, 15747–15766. doi: 10.1523/JNEUROSCI.1037-13.2013
- Swindale N. V. (1996). The development of topography in the visual cortex: a review of models. Network: Computation in Neural Systems 7, 161–247.
- von der Malsburg C. (1973). Self-organization of orientation sensitive cells in the striate cortex. Kybernetik 14, 85–100. doi: 10.1007/BF00288907
