
Scientific Reports | PMC5339801


Sci Rep. 2017; 7: 44006.

Published online 2017 March 7. doi: 10.1038/srep44006

PMCID: PMC5339801

Laura Wiles,^{1} Shi Gu,^{1,2} Fabio Pasqualetti,^{3} Brandon Parvesse,^{1} David Gabrieli,^{1} Danielle S. Bassett,^{1,4} and David F. Meaney^{a,1,5}

Received 2016 July 25; Accepted 2017 February 2.

Copyright © 2017, The Author(s)

This work is licensed under a Creative Commons Attribution 4.0 International License. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in the credit line; if the material is not included under the Creative Commons license, users will need to obtain permission from the license holder to reproduce the material. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/

We examine the role of structural autapses, connections in which a neuron synapses onto itself, in driving network-wide bursting behavior. Using a simple spiking model of neuronal activity, we study how autaptic connections affect activity patterns, and evaluate whether controllability significantly affects the changes in bursting produced by autaptic connections. Adding autaptic connections to excitatory neurons increased both the number of spiking events and the number of network-wide bursts, and we observed that excitatory synapses contributed more to bursting behavior than inhibitory synapses. We evaluated whether neurons with high average controllability, predicted to push the network into easily achievable states, affected bursting behavior differently than neurons with high modal controllability, thought to push the network into difficult-to-reach states. Results show that adding autaptic connections to excitatory neurons with high average controllability led to higher burst frequencies than adding the same number of self-looping connections to neurons with high modal controllability. The number of autapses required to induce bursting was lowered by adding autapses to high-degree excitatory neurons. These results suggest a role for autaptic connections in controlling network-wide bursts in diverse cortical and subcortical regions of the mammalian brain. Moreover, they open up new avenues for the study of dynamic neurophysiological correlates of structural controllability.

Network architecture forms a critical driver for complex function in a broad class of systems across biomedical science^{1}^{,2}^{,3}. For the brain, transcribing the architecture of white matter tracts crisscrossing the human cortex^{4}^{,5} has offered inherently new ways to explain the relationship between brain and behavior^{6}^{,7}, and its alteration in neurological disorders and psychiatric disease^{8}^{,9}^{,10}^{,11}. At the microcircuit scale, describing the anatomical connections among neurons has shown that networks do not follow a random organizational pattern within the brain, but instead follow a clustered, distance-dependent connection pattern that provides for self-sustained excitability within a cluster of neurons^{12}^{,13}^{,14}, supports the coordinated synchronization of activity across clusters^{15}^{,16}, and reveals important synaptic scaling features that organize these neuronal circuits across the phylogenetic scale^{17}. In each of these contexts, the organization of connectivity patterns plays a key role in constraining system dynamics and organism function^{18}.

In neural networks, self-looping structures known as “autapses” may offer an energetically efficient means of steering network dynamics toward specific states^{19}. An autapse is a synaptic connection from a neuron onto itself. Since their discovery over four decades ago, autapses have been documented in pyramidal neurons within the developing rat neocortex^{20} and the cat visual cortex^{21}; they appear more commonly on inhibitory neurons^{22}^{,23}^{,24} and are abundant in fast-spiking interneurons, but not in low-threshold spiking interneurons^{23}. Autaptic connections can represent only a small fraction of the thousands of excitatory and inhibitory synaptic connections received by a neuron^{25}, yet their self-stimulating nature can provide a very economical way to affect neuronal activity dynamics. To this end, several studies show that delays in autaptic inputs affect the bursting behavior and information transfer of individual neurons, offering insight into how the activity of these neurons is regulated^{26}^{,27}^{,28}^{,29}. However, relatively little is known about the consequences of self-loop connections at the network scale, and how these connections affect the overall dynamics of the neural network. As such, principles explaining the functional network role of autapses in neural circuits remain a mystery.

In this communication, we study self-looping in cortical and hippocampal neuronal networks and examine the impact of these loops on activity dynamics, from network-wide bursting to coordinated firing of neuronal subpopulations. We test the specific situation in which self-loops are placed either randomly throughout the network or at driver nodes predicted to facilitate different control strategies. We show that autaptic connections enhance the network’s excitability, increasing bursting frequency and regularity. For the networks studied, effects of autaptic connections are strongest when these connections are placed on excitatory neurons; when the number of autaptic connections is held constant, bursting frequency is higher when more autapses are placed on fewer neurons rather than when fewer autapses are placed on more neurons. Finally, we observed the greatest increase in network-wide bursting when autapses were located at points in the network that are theoretically predicted to be effective controllers.

To study the relationship between structural connectivity and neuronal network dynamics, we constructed computational neural networks and simulated their activity using Izhikevich integrate-and-fire model neurons^{30}^{,31}. Preliminary simulations showed that network activity dynamics did not change for networks containing more than 800 neurons. We therefore used a network size of 1000 neurons for all simulations. All simulations were completed using in-house software developed in the MATLAB programming language (MathWorks, Inc.).

In each simulation, we placed 800 excitatory and 200 inhibitory neurons on the surface of a unit sphere using MATLAB’s twister random number generator. We used a uniform distribution to place these neurons at different azimuthal and polar angles and to avoid clustering of the neurons at either pole. The number of outputs for each neuron was generated from a normal distribution with a mean of 93.75 outputs per neuron and a variance of 9.375, resulting in a mean of 75 excitatory inputs and 18.75 inhibitory inputs per neuron. We chose these values to reflect anatomical estimates from empirical data that suggest that – in cortex – approximately 20% of neuron inputs are inhibitory^{32}. We placed these output and input connections within the network in a distance-dependent manner, consistent with prior empirical studies^{33}^{,34}. We defined the strength of each neuron as the sum of inputs onto and outputs from a neuron. For example, an inhibitory neuron with 100 excitatory inputs (each strength 3) and 200 inhibitory outputs (each strength −5) would show a total neuron strength of −700.

We connected outputs from each neuron to other neurons using a distance-dependent drop-off probability function *P*_{ij} = 1/*d*_{ij}^{2}, where *d*_{ij} is the distance between neurons *i* and *j*.
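The placement and wiring procedure above can be sketched in Python (the paper's simulations were in MATLAB); the seeding, out-degree clipping, and sampling-without-replacement details are illustrative assumptions, not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(0)  # stand-in for MATLAB's 'twister' stream

N_EXC, N_INH = 800, 200
N = N_EXC + N_INH

# Place neurons on the unit sphere. Azimuth is uniform on [0, 2*pi);
# sampling cos(polar angle) uniformly yields a uniform surface density,
# avoiding crowding at the poles.
azimuth = rng.uniform(0.0, 2.0 * np.pi, N)
polar = np.arccos(rng.uniform(-1.0, 1.0, N))
xyz = np.column_stack([
    np.sin(polar) * np.cos(azimuth),
    np.sin(polar) * np.sin(azimuth),
    np.cos(polar),
])

# Pairwise distances and the P_ij = 1/d^2 drop-off used for wiring.
d = np.linalg.norm(xyz[:, None, :] - xyz[None, :, :], axis=2)
with np.errstate(divide="ignore"):
    p = 1.0 / d**2
np.fill_diagonal(p, 0.0)  # baseline network contains no autapses

# Out-degree ~ N(mean 93.75, variance 9.375); wire each neuron's outputs
# to targets drawn with probability proportional to 1/d^2.
out_degree = np.clip(rng.normal(93.75, np.sqrt(9.375), N), 1, None).astype(int)
A = np.zeros((N, N))
for i in range(N):
    targets = rng.choice(N, size=out_degree[i], replace=False,
                         p=p[i] / p[i].sum())
    A[i, targets] = 1.0
```

The resulting matrix **A** has an empty diagonal, so the autapse manipulations described later can be implemented purely as edits to diagonal entries.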

We model neuronal activity with systems of ordinary differential equations, following the work of Izhikevich in 2003. First, we define the neuron’s membrane potential, membrane recovery variable, and after-spike reset values as follows:

*dv*/*dt* = 0.04*v*^{2} + 5*v* + 140 − *u* + *I*

*du*/*dt* = *a*(*bv* − *u*)

if *v* ≥ 30 mV, then *v* ← *c* and *u* ← *u* + *d*,

where the dimensionless variable *v* represents the neuron’s membrane potential and *u* represents the neuron’s membrane recovery variable. Each neuron is assigned the parameters *a, b, c*, and *d*, which govern the intrinsic behaviors and dynamics of the neurons^{30}. For our simulations, we assigned values for *a, b, c*, and *d* such that the behavior of excitatory neurons would be characterized by regular-spiking, consistent with the majority of neurons in the cortex, but still exhibit enough heterogeneity that any two neurons would never display identical dynamics^{30}. For inhibitory neurons, values for *a, b, c*, and *d* were assigned such that both fast-spiking and low-threshold spiking interneurons existed in the simulated system^{30}.
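A minimal Python sketch of the single-neuron dynamics described above; the regular-spiking parameter values (a = 0.02, b = 0.2, c = −65, d = 8) come from Izhikevich (2003), while the constant input current is an illustrative choice:

```python
def izhikevich(a, b, c, d, I, T_ms=1000.0, dt=0.2):
    """Simulate one Izhikevich neuron; return spike times in ms."""
    v, u = -65.0, b * -65.0            # resting potential and recovery
    spikes = []
    for step in range(int(T_ms / dt)):
        if v >= 30.0:                  # spike: reset v, bump u
            spikes.append(step * dt)
            v, u = c, u + d
        # membrane-potential and recovery dynamics (Izhikevich, 2003)
        v += dt * (0.04 * v**2 + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
    return spikes

# Regular-spiking cortical parameters from Izhikevich (2003).
spikes = izhikevich(a=0.02, b=0.2, c=-65.0, d=8.0, I=10.0)
```

With a sustained suprathreshold input, this parameter set produces tonic regular spiking; varying *a, b, c, d* per neuron produces the heterogeneity described in the text.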

Following^{31}, we applied random 1Hz thalamic input to the network, consistent with the mean firing rates of cortical neurons observed *in vivo*^{39}^{,40}. We included the exponential decay of synaptic currents. The rate of membrane potential change was capped (225mV/ms) to avoid unrealistic membrane potentials (>50mV) during a spiking event. When a neuron fired an action potential, the current was injected into its output neurons in the next time step (0.2ms later).

We studied neuronal dynamics in networks characterized by different excitatory and inhibitory strengths to identify excitation and inhibition levels that produced similar spiking activity. Intuitively, at different excitation and inhibition levels, a neuron might require fewer or more synchronous inputs to fire an action potential. We examined 10 mean excitatory strengths, from 1 to 10 in unit increments. With a mean excitatory connection strength of 1, a neuron would need to receive approximately 20 synchronous inputs to fire an action potential; with an excitatory connection strength of 10, a neuron would only need to receive two synchronous inputs to fire an action potential^{31}. To complement these 10 excitation levels, we also examined 10 mean inhibitory strengths, from −10 to −1 in unit increments. To achieve numerical stability and obtain robust results, we performed ten 120s simulations with 5 steps/ms for each possible combination of mean excitatory strength and mean inhibitory strength. We then analyzed the resultant spiking behavior to measure firing rate and to isolate bursts.

We added autapses, defined as self-loops in the network (mathematically: non-zero elements on the diagonal of the connectivity matrix **A**), to either excitatory or inhibitory neurons using two characteristics of that node’s connections: strength and controllability (Fig. 1C,D). Strength is defined as the sum of the inputs onto and outputs from that neuron. Controllability can be separated into notions of average control and modal control, which are defined in detail in the next section. Here we simply describe these notions intuitively. Average control describes the theoretically predicted preference for the node to push the system into local easily-reachable states, and modal control describes the theoretically predicted preference for the node to push the system into distant difficult-to-reach states. Strength, average control, and modal control provide complementary estimates of the influence a node has on network dynamics.

When adding autapses, we implemented seven targeting strategies, adding autapses to neurons with the (1) highest strength, (2) lowest strength, (3) highest average controllability, (4) lowest average controllability, (5) highest modal controllability, and (6) lowest modal controllability as well as (7) neurons chosen uniformly at random. To construct appropriate null models for our subsequent analyses, we consider that by adding autapses to a neuron, we increased both the number of output connections from and the number of input connections to the selected neuron. We therefore implemented two null models. First, we constructed a null model that accounts for the increase in outputs on autaptic neurons by adding outputs from the would-be autaptic neurons to other neurons in the network in a distance-dependent manner. Second, we constructed a null model that accounts for the increase in inputs on autaptic neurons by adding inputs from other neurons to the would-be autaptic neuron.

In both autaptic network and null models, we added connections (self-loops or non-self-loops, respectively) to between 10% and 100% (in increments of 10%) of either excitatory or inhibitory neurons. Given the relatively rare frequency of autaptic connections *in vivo*, we added small numbers of autaptic or non-autaptic connections (1%, 2%, 3%, 4%, 5%) to the selected neurons. Autaptic connections were added as self-looping connections in proportion to the outputs from a given neuron; e.g., a neuron with 100 separate outputs to other neurons received 3 additional self-looping (autaptic) connections to produce 3% new autaptic connections. Non-autaptic connections followed a similar mapping. To maintain consistency with previous simulations, current was injected from these autaptic connections at the next timestep. To obtain robust results, we completed ten sets of simulations for each combination of excitatory and inhibitory strengths. We performed each of the ten simulations at a given excitation and inhibition level on a different original connectivity matrix with no autapses, which we then modified by adding autaptic or non-autaptic connections using the targeting strategies described above. From each modified network, we analyzed spiking activity to better understand the effects of targeted connectivity changes on bursting behavior.
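The autapse-addition rule (self-loops in proportion to a neuron's number of outputs, accumulated on the diagonal of the weight matrix) can be sketched as follows; the function name, the toy random network, and the fixed synaptic weight are illustrative assumptions:

```python
import numpy as np

def add_autapses(A, neuron_ids, pct, weight):
    """Return a copy of A with self-loops added to the given neurons.

    Each selected neuron receives round(pct * out_degree) autaptic
    connections (at least one), accumulated on the diagonal.
    """
    A = A.copy()
    for i in neuron_ids:
        out_degree = np.count_nonzero(A[i])      # existing outputs
        n_autapses = max(1, round(pct * out_degree))
        A[i, i] += n_autapses * weight           # self-loop strength
    return A

# Toy directed network with no autapses (zero diagonal).
rng = np.random.default_rng(1)
A = (rng.random((100, 100)) < 0.1).astype(float)
np.fill_diagonal(A, 0.0)

# Add 3% autaptic connections (weight 3, an assumed excitatory strength)
# to three selected neurons.
A_aut = add_autapses(A, neuron_ids=[0, 5, 7], pct=0.03, weight=3.0)
```

Because only diagonal entries change, the same routine can serve as a template for the null models by instead adding off-diagonal input or output connections.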

We employ the targeted addition of autapses to neurons to study potential mechanisms by which a network can control its global dynamics. The simplest notion of a node that has a high level of influence on dynamics is the notion of a hub^{41}. A hub is a node that has either many connections emanating from it (high degree), or on average very strong connections emanating from it (high strength) or both. Here because we are studying inherently weighted graphs, we study a neuron’s strength, defined as the sum of the inputs onto and outputs from that neuron. In prior studies, this metric of influence has been shown to be an indirect proxy for controllability^{42}^{,43} and to correlate with statistical measurements of system dynamics^{44}^{,45}^{,46}.

Arguably a more direct measure of influence is one that would consider not just which connections a neuron had, but also how the neuron used them. Philosophically, the notion of *influence* is essentially a *dynamical* notion, implying change in a system’s state. Thus, for a more direct measure of influence, we turned to applications of dynamical systems theory to the problem of network control. In network control theory, one wishes to understand how to drive a networked system from a specified initial state to a specified target state in finite time and with limited energy. The rather nascent field has developed a theoretical framework, analytical results, and statistical tools that can be used to identify control points, which are theoretically predicted to be critical for driving the network’s observed dynamics^{19}^{,47}.

Traditionally utilized to study technological, robotic, and mechanical systems, network control theory offers a particularly appealing conceptual and mathematical framework in which to study neural systems^{42}^{,48}^{,49}. In this context, control points in the network are neurons that are predicted to be critical for driving large-scale neural dynamics. To identify these control neurons, we implement a linearized generalization of nonlinear models of cortical activity^{50}^{,51}. Specifically, we used a noise-free linear discrete-time and time-invariant model of network dynamics^{42},

**x**(*t* + 1) = **Ax**(*t*) + **B**_{κ}**u**_{κ}(*t*),

where **x** describes the state of neurons over time, and **A** is a signed, weighted, directed adjacency matrix whose elements, *A*_{ij}, specify the strength of the connection from node *i* to node *j* (after a normalization to ensure Schur stability). The matrix **B**_{κ} is an input matrix that identifies the control neurons, *κ*={*k*_{1}, …, *k*_{m}}, such that B_{κ}=[*e*_{k1} … *e*_{km}], where *e*_{i} notes the *i*-th canonical vector of dimension N, and N is the number of neurons in the network. The signal u_{κ} is the control input to the control neurons.
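The linear model and the canonical-vector input matrix can be illustrated with a short sketch; the network size, the stabilization constant, and the unit input pulse are arbitrary choices for demonstration:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 50
A = rng.standard_normal((N, N))
A = A / (1.001 * np.max(np.abs(np.linalg.eigvals(A))))  # Schur-stabilize

# Control input enters at neurons kappa via the canonical input matrix B:
# column m of B is the canonical vector e_{k_m}.
kappa = [3, 17]
B = np.zeros((N, len(kappa)))
for col, k in enumerate(kappa):
    B[k, col] = 1.0

# Simulate x(t+1) = A x(t) + B u(t) from rest with a unit pulse at t = 0.
T = 20
x = np.zeros((T + 1, N))
u = np.zeros((T, len(kappa)))
u[0] = 1.0
for t in range(T):
    x[t + 1] = A @ x[t] + B @ u[t]
```

The pulse enters only at the control neurons and then propagates through **A**, which is the mechanism the controllability statistics below summarize.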

Using this model, we can define two distinct controllability strategies: the average controllability and the modal controllability, which – as mentioned earlier – describe the ability to push a system into local easily-reachable states or into distant difficult-to-reach states, respectively. To define the notion of average controllability, we first write down the controllability Gramian, **W**_{κ}, as

**W**_{κ} = Σ_{τ=0}^{∞} **A**^{τ}**B**_{κ}**B**_{κ}^{T}(**A**^{T})^{τ},

where T indicates the matrix transpose and the summation index τ ranges from 0 to infinity. Average controllability is defined as the trace of the inverse of the controllability Gramian, Trace(**W**_{κ}^{−1}), but for computational purposes it can be approximated via Trace(**W**_{κ}) (see ref. ^{42}). Thus, to identify nodes with the highest average controllability, we select nodes that maximize Trace(**W**_{κ}). Since the trace is a linear mapping and is invariant under cyclic permutations, we note that

Trace(**W**_{κ}) = Σ_{i∈κ} *X*_{ii},

where *X*_{ii} represents the *i*-th diagonal entry of the matrix **X** = Σ_{τ=0}^{∞}(**A**^{T})^{τ}**A**^{τ}. To maximize the trace, we chose the set of nodes, κ, containing the largest diagonal entries of **X**. If **A** is stable, then **X** is the solution to the discrete-time Lyapunov equation

**A**^{T}**XA** − **X** + **I** = 0.
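Per-node average controllability can then be computed by summing the Lyapunov series directly and reading off the diagonal; this pure-NumPy sketch assumes a Schur-stable matrix and is an illustration, not the authors' implementation:

```python
import numpy as np

def lyapunov_series(A, tol=1e-12, max_terms=10_000):
    """X = sum_{tau>=0} (A^T)^tau A^tau, i.e. the solution of the
    discrete-time Lyapunov equation A^T X A - X + I = 0.
    Assumes A is Schur-stable (spectral radius < 1)."""
    X = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for _ in range(max_terms):
        term = A.T @ term @ A       # next (A^T)^tau A^tau term
        X += term
        if np.abs(term).max() < tol:
            break
    return X

rng = np.random.default_rng(3)
N = 30
A = rng.standard_normal((N, N))
A /= 1.1 * np.max(np.abs(np.linalg.eigvals(A)))   # Schur-stabilize

X = lyapunov_series(A)
avg_ctrl = np.diag(X)                       # per-node average controllability
top_nodes = np.argsort(avg_ctrl)[::-1][:5]  # candidate control neurons
```

Selecting the largest diagonal entries of **X** is exactly the "highest average controllability" targeting strategy used when placing autapses.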

To complement the notion of average control, we also define modal controllability, which is highest in nodes that can steer the system toward difficult-to-reach states. Modal controllability is calculated from the eigenvector matrix **V** = [*v*_{ij}] of the adjacency matrix **A**. Following ref. ^{42}, we define the modal controllability of node *i* as ϕ_{i} = Σ_{j=1}^{N}(1 − λ_{j}^{2}(**A**))*v*_{ij}^{2}, where λ_{j}(**A**) are the eigenvalues of **A**; nodes with high ϕ_{i} can control all dynamic modes of the network, and can therefore drive the system toward difficult-to-reach configurations.

We assign each neuron a ranked value between 1 and N based on their ϕ value, with 1 being the neuron with the lowest modal controllability (lowest ϕ) and N being the neuron with the highest modal controllability (highest ϕ).
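The modal controllability score and its ranking can be sketched as follows; eigenvalue and eigenvector magnitudes are used so the formula also covers the complex eigenpairs that arise in a directed network (that generalization is our assumption):

```python
import numpy as np

def modal_controllability(A):
    """phi_i = sum_j (1 - |lambda_j(A)|^2) |v_ij|^2 for V = [v_ij],
    the eigenvector matrix of A. Magnitudes handle complex eigenpairs
    of a directed (non-symmetric) network."""
    evals, V = np.linalg.eig(A)
    return (np.abs(V) ** 2) @ (1.0 - np.abs(evals) ** 2)

rng = np.random.default_rng(4)
N = 30
A = rng.standard_normal((N, N))
A /= 1.1 * np.max(np.abs(np.linalg.eigvals(A)))   # Schur-stable

phi = modal_controllability(A)
ranks = np.argsort(np.argsort(phi)) + 1   # 1 = lowest phi, N = highest
```

The `ranks` array implements the 1-to-N ranking described in the text, so the "highest modal controllability" targeting strategy simply picks the neurons with the largest ranks.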

Now that we have defined strategies to target the addition of autapses to neurons, we wish to understand their role in controlling global network dynamics. We therefore define several summary statistics of neural dynamics, including the firing rate and network-wide bursts, which are coordinated firing events across large numbers of neurons within a brief time period. Note that other complementary definitions of what constitutes a network-wide burst can be found in the literature, and we briefly review them in the Supplement.

Quantitatively, we define network-wide bursts as periods of activity in which the number of neurons firing at the same time met or exceeded a threshold level of 40% of neurons in a millisecond. We implemented a 5ms tolerance in the burst detection algorithm, combining two groups of neurons into a single burst if they fired within 5ms of each other. This burst detection algorithm was robust to changes in the threshold level of neurons that must be active for a period of activity to be considered a burst (Supp. Fig. 1).
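A sketch of this burst-detection rule (40% threshold, 5ms merge tolerance); the 1ms binning convention and the toy spike-count trace are illustrative assumptions:

```python
import numpy as np

def detect_bursts(spike_counts, n_neurons, frac=0.40, tol_ms=5, dt_ms=1.0):
    """Find network-wide bursts: runs of bins in which at least `frac`
    of neurons fire, merging events separated by less than `tol_ms`.
    `spike_counts[t]` = number of distinct neurons firing in bin t.
    Returns a list of (start_ms, end_ms) intervals."""
    above = spike_counts >= frac * n_neurons
    bursts, start = [], None
    for t, hot in enumerate(above):
        if hot and start is None:
            start = t
        elif not hot and start is not None:
            bursts.append([start * dt_ms, t * dt_ms])
            start = None
    if start is not None:
        bursts.append([start * dt_ms, len(above) * dt_ms])
    # Merge bursts separated by less than the tolerance.
    merged = []
    for b in bursts:
        if merged and b[0] - merged[-1][1] < tol_ms:
            merged[-1][1] = b[1]
        else:
            merged.append(b)
    return [tuple(b) for b in merged]

# Toy trace: two nearby events (which merge) and one distant event.
counts = np.zeros(100)
counts[10:13] = 450
counts[15:17] = 500
counts[60:63] = 420
bursts = detect_bursts(counts, n_neurons=1000)
```

From the returned intervals, burst frequency, interburst intervals, and burst durations all follow directly.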

After defining bursts, we calculated the mean and standard deviation of three summary statistics for each simulation: the burst frequency, the interburst-interval, and the burst duration. We defined the burst frequency to be the mean number of bursts per second across the simulation. We calculated the mean interburst-interval (IBI) from the temporal midpoints of the bursts in each simulation. Finally, we defined the mean burst duration as the fraction of simulation time spent in the network-wide bursting state. To summarize these results across simulations, we calculated either an unweighted mean or standard deviation (for burst frequency) or a weighted mean or standard deviation (for burst duration and average IBI). We used a weighted mean for the second two statistics because each simulation was built on a different connectivity matrix, and therefore could display different bursting parameters. For the IBI and burst duration, we computed the weighted average, *X*_{weighted}, as follows:

*X*_{weighted} = (Σ_{i=1}^{n} *w*_{i}*x*_{i})/(Σ_{i=1}^{n} *w*_{i}),

where *x*_{i} is the data value (the average IBI or burst duration from the individual simulation), the weight *w*_{i}=1/*σ*_{i}^{2}, *σ*_{i} is the standard deviation of *x*_{i}, and *n* is the number of data values (number of independent simulations). We also calculated the weighted standard deviation,

*σ*_{weighted} = sqrt[(Σ_{i=1}^{n} *w*_{i}(*x*_{i} − *X*_{weighted})^{2})/(Σ_{i=1}^{n} *w*_{i})],

for the IBI and burst duration across simulations. Together, these parameters described the bursting behavior of the networks with and without autapses.
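The inverse-variance weighting can be sketched as follows; the exact normalization of the weighted standard deviation is not shown in the text, so a common form is assumed here:

```python
import numpy as np

def weighted_stats(x, sigma):
    """Inverse-variance weighted mean and (a common form of) weighted
    standard deviation across simulations, with w_i = 1 / sigma_i^2."""
    x, sigma = np.asarray(x, float), np.asarray(sigma, float)
    w = 1.0 / sigma**2
    mean = np.sum(w * x) / np.sum(w)
    std = np.sqrt(np.sum(w * (x - mean) ** 2) / np.sum(w))
    return mean, std

# Example: mean IBIs (s) from three simulations with per-simulation SDs.
ibis = [2.0, 2.5, 3.0]
sds = [0.1, 0.5, 0.4]
mean_ibi, sd_ibi = weighted_stats(ibis, sds)
```

Simulations with tighter per-run variability (smaller σ) pull the summary toward their values, which is the intended behavior when runs use different connectivity matrices.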

We used JMP Pro 11 (SAS Institute Inc.) for all statistical analyses. To identify autaptic conditions where the addition of autapses induced significant changes in burst frequency, we performed a mixed-model ANOVA using the Full Factorial Repeated Measures ANOVA Add-In (https://community.jmp.com/docs/DOC-6993, Julian Harris, SAS employee). Each original, non-autaptic connectivity matrix was treated as a “subject.” The between-subjects factor was the type of connection added (i.e., autapses, non-autaptic inputs, or non-autaptic outputs) while the within-subjects factor was the percent connections added, either autaptic or non-autaptic connections. When the interaction effect between the type of connection added and the percent connections added was significant, we performed post-hoc analyses using Tukey’s HSD test. These post-hoc results were used to identify targeting strategies and simulation parameters in which adding autapses significantly changed burst frequency from that of the original non-autaptic network and from the appropriate null models. Separate statistical analyses were completed for each of the seven targeting strategies and for each possible pair of excitatory and inhibitory strength values.

To identify differences in burst frequency induced by adding connections according to the seven targeting strategies, we again performed a mixed-model ANOVA using the Full Factorial Repeated Measures ANOVA Add-In (https://community.jmp.com/docs/DOC-6993, Julian Harris, SAS employee) where each original connectivity matrix was a “subject.” Here, the within-subjects factor was again the percent connections added. The between-subjects factor was the targeting strategy used to select neurons to which to add autaptic or non-autaptic connections. When there were significant effects of the interaction between the type of connection added and the targeting strategy, we performed post-hoc analyses using Tukey’s HSD test. These post-hoc results were used to identify the simulation parameters between which the burst frequency significantly differed. Additionally, when there were significant effects of the targeting strategy, we performed post-hoc analyses to identify which targeting strategies were significantly different from one another. In the main text, we report results comparing three targeting strategies: neurons chosen by highest average controllability, highest modal controllability, and uniformly at random. In the supplement, we show results for all seven targeting strategies (see Supp. Fig. 2). Separate statistical analyses were performed for each type of connection added within each excitatory/inhibitory strength combination.

We modeled networks of cortical neurons by placing excitatory and inhibitory neurons on the surface of a sphere (Fig. 1A) and connecting them – with no autapses – in a distance-dependent manner (Fig. 1B). We next modified these networks by adding autapses (Fig. 1C), or self-loops, to groups of excitatory or inhibitory neurons chosen using certain characteristics of their local network neighborhood. We also constructed null model networks by adding input or output connections on the selected group of neurons to account for the increase in the number of connections in a network in which autapses were added (Fig. 1C). These null models were used to test whether the observed dynamical changes were simply due to the increase in connections in the autaptic networks or were more interestingly due to the autaptic nature of the connections specifically. We study the effect of autapses on several summary statistics of bursting behavior (Fig. 1D). See Materials and Methods for additional details.

We observed no change in dynamics when we simulated from 120 seconds to 3,600 seconds of neural activity in the network. Similarly, we observed no change in the dynamics across networks of different size (1,000–10,000 neurons; see Supplemental Fig. 2). Therefore, we simulated 120s of activity for networks with different levels of mean excitatory and inhibitory strength (Fig. 2). Based on the amount of excitation and inhibition present in the network, we identified three distinct regimes of neural dynamics. At low excitation levels, independent of the inhibition level simulated, network-wide bursting never occurred (Fig. 2B,i). Activity in this regime was asynchronous and dominated by noise. As the excitation level increased, activity became less dependent on noise and network-wide bursts occurred more frequently and more regularly (Fig. 2B,ii). At high excitation levels, the network entered a chronic bursting regime (Fig. 2B,iii). The decrease in burst frequency at very high excitatory and very low inhibitory strengths is due to this transition to chronic bursting; burst frequency decreases as burst duration increases (see Supp. Fig. 3).

Controllability describes the potential to drive a dynamical system from an initial state to a desired final state given that inputs are applied to one or more nodes in the network (Fig. 3A). The importance of individual nodes in driving the system to certain states can be quantified using distinct control strategies. Two commonly studied control strategies are average control and modal control. Nodes with high average controllability are theoretically predicted (by a simplified model of linear system dynamics) to drive the system to many energetically easy-to-reach states (Fig. 3B). Nodes with high modal controllability are theoretically predicted (again, by a simplified model of linear system dynamics) to drive the system to many difficult-to-reach states (Fig. 3C).

In the simulated networks, we found that average and modal controllability were directly related to node strength in excitatory neurons (Fig. 3D,E). In the excitatory population, we observe a positive correlation between average controllability and the total neuron strength, and a negative correlation between modal controllability and the total neuron strength. Moreover, we found these associations between controllability and neuron strength were driven primarily by the output strength of each neuron in the network. These relationships are consistent with those observed in undirected networks representing large-scale white matter connectivity in the human brain^{42}^{,48}.

Interestingly, the relationship between controllability and strength in the inhibitory neurons was less clear, as we observed a non-trivial dependence between the controllability statistics and the balance between excitation and inhibition in the network. Specifically, when the mean inhibitory strength was larger than the mean excitatory strength, excitatory neurons displayed lower average and higher modal controllability values, while inhibitory neurons displayed higher average and lower modal controllability values. In contrast, when the excitatory strength was larger than the inhibitory strength, excitatory neurons tended to display higher average and lower modal controllability values than inhibitory neurons. As the difference between the magnitudes of excitatory and inhibitory strength increased, the separation between controllability values of the excitatory and inhibitory neurons became more pronounced.

We added varying amounts of autapses or non-autaptic connections to a specified fraction of randomly selected excitatory or inhibitory neurons throughout the network (Fig. 4). At lower excitatory strengths, burst frequency increased with both the percent of autaptic neurons and the number of autapses added to autaptic neurons. However, adding non-autaptic connections in the null model simulations did not increase burst frequency (Fig. 4A). At higher excitatory strengths, burst frequency again increased with the percent of autaptic neurons and the number of autapses added (Fig. 4B). Here, unlike at lower excitation levels, adding non-autaptic connections in the null model simulations also increased bursting frequency; however, higher levels of bursting still occurred in the autaptic network compared to the null model networks.

We can summarize the data described above by computing the fraction of autaptic conditions (fraction of neurons × number of autapses; see Methods) within each excitation and inhibition combination that had significantly different burst frequencies from those observed in the original non-autaptic system and from the burst frequencies of the corresponding null model systems (Fig. 4C). We observed that adding autapses to excitatory neurons induces greater changes in burst frequency than adding autapses to inhibitory neurons. Moreover, we observed that, when adding autapses to excitatory neurons, the level of excitation more strongly affects changes in the network’s bursting behavior than the level of inhibition.

Next we asked how these results depended on the number of autaptic connections that were added to the network. Networks constructed with more autapses displayed an increased burst frequency and burst regularity compared to networks constructed with fewer autapses (Fig. 5A). Interestingly, the relationship between burst frequency and number of autapses was modulated by the mean excitation strength of the system. At lower excitation levels, more autapses were needed to induce the same differences in burst frequency observed at higher excitation levels.

Importantly, these results do not address the question of whether the important driver of bursting dynamics is simply the number of autaptic connections, or whether the more fundamental parameter is the number of autapses per neuron. To directly address this question, we computed, for each autaptic condition (fraction of neurons × number of autapses), the fraction of the excitatory/inhibitory strength combinations with burst frequencies that were significantly different from baseline and from the input and output null models. Again, we observed a larger fraction of differences in bursting dynamics due to the addition of autapses to excitatory rather than to inhibitory neurons (Fig. 5B). We also observed that the number of autapses added to an excitatory neuron played a larger role in driving the increase in burst frequency than the fraction of excitatory neurons in the network that were autaptic (Fig. 5B, bar graphs). These results demonstrate that the effect of autapses on network dynamics is nonlinear: the same number of autaptic connections has a greater impact on burst frequency when concentrated on fewer neurons than when spread across more neurons.

After studying the effect of autapses added to neurons chosen uniformly at random, we next asked whether we could target autapses to specific “control” neurons to increase burst frequency even further. To address this question, we examine bursting dynamics when autapses are added to either excitatory or inhibitory neurons with either the highest average or highest modal controllability values (for definitions, see Methods). At a mean excitatory strength of 5, adding autapses to excitatory neurons with the highest average controllability resulted in higher burst frequencies at certain autaptic conditions than when autapses were added according to the highest modal controllability (Fig. 6A). At a stronger excitatory level of 7, although there was a significant interaction effect between the amount of connections added and the targeting strategy, we did not observe significant differences between corresponding autapse levels of average and modal controllability (Fig. 6B).

To better understand the impact of excitatory and inhibitory strength values on these results, we calculated the number of significant differences in burst frequency between average and modal controllability targeting strategies that occurred across the 11 excitatory and inhibitory strength combinations (Fig. 6C). We observed a maximum of 5 differences, which can be explained by Fig. 5D where we see that significant pairwise differences between bursting dynamics observed in different targeting strategies only occurred at excitation/inhibition levels of 5/−2, 5/−6, 5/−10, 7/−10 and 9/−10. Additionally, the majority of significant differences between targeting strategies occurred when autapses were added to less than half of the excitatory neurons (Fig. 6D). As we added autapses to increasingly more neurons according to these two opposing targeting strategies, the overlap between the groups of neurons that targeting strategies selected increased, making the rules more similar and leading to fewer observed differences in bursting dynamics.

Results from all targeting strategies are shown in Supplementary Information. No targeting strategy or interaction effect was observed when we added autapses to inhibitory neurons.
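Operationally, each targeting strategy ranks neurons by some score (average controllability, modal controllability, degree, or a random ordering) and adds self-loops to the top-ranked excitatory cells. A small illustrative helper, assuming a weight matrix whose diagonal holds autaptic strengths (the function name and signature are ours, not from the paper's code):

```python
import numpy as np

def add_autapses(W, scores, excitatory, k, strength):
    """Add self-loops of weight `strength` to the k excitatory neurons
    ranked highest by `scores`. Illustrative helper, not the paper's code."""
    W = W.copy()
    idx = np.flatnonzero(excitatory)                 # indices of excitatory cells
    top = idx[np.argsort(scores[idx])[::-1][:k]]     # top-k by descending score
    W[top, top] += strength                          # autapse = diagonal entry
    return W
```

Swapping in a different `scores` vector (or a random permutation) reproduces the different targeting strategies compared above.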

Here we examine the relationship between theoretical measures of structural controllability and observed measures of network dynamics. We build on a well-developed numerical simulation of cortical and hippocampal network dynamics to study the influence of autaptic connections on bursting frequency and regularity. Autaptic connections are represented as self-loops in the network and present unique control features whose impact on neuronal network dynamics is unknown. We show that these self-loops differentially influence network dynamics: when applied to excitatory (but not inhibitory) neurons, these self-loops lower the threshold for network bursting. Directing self-loops to nodes of high average controllability, which are theoretically predicted to effectively move the system into local easily-reachable states, leads to an increase in the frequency and regularity of network-wide bursts. Together, these results suggest a role of autaptic connections in controlling network-wide bursts in diverse cortical and subcortical regions of mammalian brain.

In our network, the most salient outputs are the bursting or synchronization of the network, and the corresponding interburst intervals, that emerge over time. Synchronization of brain networks is often considered to be key for learning^{52}, memory^{53}^{,54}, and other higher-order cognitive processes^{55}^{,56}^{,57}. In contrast, sporadic or sustained bursting can lead to the development of pathological networks in diseases such as epilepsy^{58}. Our findings show that bursting appears over some, but not all, combinations of excitatory/inhibitory synaptic strength. Our results describing a broad class of bursting types are consistent with previous models showing a dynamic range of activity in neuronal systems, including the coherent activity observed in health and the abnormal activity observed in disease. These results further add to the literature by demonstrating that the observed dynamics (burst frequency and regularity) are directly driven by the underlying network connectivity and synaptic weights between neurons. These networks were designed to model only local microcircuit architectures with no delay among neurons in the network. Our findings provide insight into how these local self-loops can regulate the neural dynamics of these microcircuits. A critical feature for the influence of self-loops is that the circuits exist near or above the transition to bursting behavior. At low synaptic strength levels, adding autaptic connections did not elicit bursting because the autaptic connections alone did not provide enough synaptic input for the network to fire. At or near the transition to bursting, we found that spreading autaptic connections among a number of excitatory neurons affected the output neural dynamics (bursting) more significantly than concentrating many autapses on a smaller number of neurons.
From a network perspective, this general result indicates that drivers of network behavior exist preferentially at the level of single nodes, rather than at the level of single edges within the network. The observation of single node drivers persisted across networks of different sizes, whereas adding edges would affect the relative transition to bursting behavior.
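The networks above build on Izhikevich's simple spiking model^{30}, in which an autapse is just a nonzero diagonal entry of the synaptic weight matrix. A minimal sketch, assuming regular-spiking parameters for every neuron and a Gaussian noise drive (the uniform parameters, noise level, and function name are our simplifications, not the study's exact configuration):

```python
import numpy as np

def simulate(W, T=1000, noise=5.0, seed=0):
    """Izhikevich spiking network; W[i, i] != 0 is an autapse (self-loop).
    Returns a (T, N) boolean spike raster."""
    rng = np.random.default_rng(seed)
    N = W.shape[0]
    a, b, c, d = 0.02, 0.2, -65.0, 8.0    # regular-spiking parameters (simplification)
    v = np.full(N, -65.0)                 # membrane potential (mV)
    u = b * v                             # recovery variable
    raster = np.zeros((T, N), dtype=bool)
    for t in range(T):
        fired = v >= 30.0                 # spike threshold
        raster[t] = fired
        v[fired] = c                      # reset after a spike
        u[fired] += d
        # synaptic input: noise plus weighted incoming spikes, including self-loops
        I = noise * rng.standard_normal(N) + W @ fired
        for _ in range(2):                # two 0.5 ms Euler half-steps for stability
            v += 0.5 * (0.04 * v**2 + 5.0 * v + 140.0 - u + I)
        u += a * (b * v - u)
    return raster
```

Adding an excitatory autapse of strength `s` to neuron `i` is then `W[i, i] += s`; burst statistics such as interburst intervals can be read off the resulting raster.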

Physiological estimates of autaptic connections in excitatory neurons rarely exceed 1–2% of neurons within a network^{59}, while some interneurons can display significant levels of self-inhibition^{22}^{,23}^{,24}. It is interesting to note that we observed significant transitions in network-wide behavior when our simulations extended beyond these physiological conditions. These data support the plausible intuition that neuronal networks *in vivo* operate at an optimal point for shifting network dynamics by the deletion or addition of only a few self-loops, supporting maximal flexibility or dynamic range. This type of self-loop modulation might occur as a function of the synaptic pruning that is common during neuronal development, which can work to consolidate network dynamics towards a stable equilibrium point^{60}. An alternative potential mechanism for self-loop formation is sprouting, commonly observed after injury, which could transform a low-activity network into a highly active network with periods of synchronization^{61}. The impact of these dynamics is less clear and depends on the frequency of the neuronal activity, with an enhancement of activity potentially promoting prosurvival signaling through the nuclear activation of antioxidant signaling pathways^{62}^{,63}. Alternatively, extensive aberrant sprouting could drive the network into a state of overexcitation, which could in turn lead to targeted neurodegeneration from chronic, seizure-like bursting of the network.

A key question we explored was how the nonlinear dynamics of this commonly studied network were influenced by the patterns of structural connections surrounding single neurons. To gain an understanding of this relationship, we drew from the field of structural controllability^{64}^{,65}^{,66}: a subfield of control and dynamical systems theory that offers predictions of which nodes in a network might act as control points under the assumption of simplified linear dynamics. We asked whether these predictions offered fundamental utility in understanding the complex behaviors of neuronal networks. We observed that for a range of excitatory and inhibitory synaptic strengths, the structural network change elicited by adding autapses to putative control points in the network increased burst frequency and regularity to a much greater degree than adding autapses to neurons chosen uniformly at random. These results demonstrate that the predictions of control points derived from a simplified linear model of neuronal network dynamics are supported by observed changes in network dynamics, consistent with reported results at larger spatial scales^{49}. It will be interesting in the future to study the role of alternative control strategies (including boundary controllability^{19}^{,42}) in the other areas of the excitation/inhibition phase space characterized by other network behaviors, including either continuous bursting or the lack of bursting.
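The two controllability diagnostics used here can be computed directly from the weighted adjacency matrix: average controllability as the trace of the controllability Gramian when a single node is driven, and modal controllability from the eigenvectors of the stabilized system (Hamdan & Nayfeh^{43}; Gu *et al*.^{42}). A sketch assuming discrete-time linear dynamics x(t+1) = A x(t); the normalization scheme is an assumption borrowed from prior connectome work, and the series truncation is our simplification:

```python
import numpy as np

def controllability(A, horizon=200):
    """Per-node average and modal controllability for x(t+1) = A x(t)."""
    # Scale so the spectral radius is below 1 (Gu et al.-style normalization;
    # an assumption here, not uniquely determined by the paper's text)
    A = A / (1.0 + np.linalg.svd(A, compute_uv=False)[0])
    N = A.shape[0]
    # Average controllability of node i: trace of the Gramian when only node i
    # is driven, trace(W_i) = sum_k [(A^k)^T A^k]_{ii}; truncated geometric series.
    P = np.eye(N)
    Ak = np.eye(N)
    for _ in range(horizon):
        Ak = Ak @ A
        P += Ak.T @ Ak
    avg = np.diag(P).copy()
    # Modal controllability (Hamdan & Nayfeh): phi_i = sum_j (1 - |lam_j|^2) |v_ij|^2,
    # large when node i strongly engages the fast-decaying (hard-to-reach) modes.
    lam, V = np.linalg.eig(A)
    phi = ((1.0 - np.abs(lam) ** 2) * np.abs(V) ** 2).sum(axis=1)
    return avg, phi
```

Ranking neurons by `avg` versus `phi` yields the two opposing targeting strategies whose bursting outcomes are compared above.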

Although we observed correspondence between measures of controllability and the resulting dynamical behavior of networks, we recognize the activation and bursting phenomenon that appears in the networks is a nonlinear process. As the linear approximation of inherently nonlinear systems is under active study, we found many corresponding connections between linear control theory and bursting behavior, but these were not complete. For example, moving a network that is already bursting into a higher bursting state may be viewed as moving it into an easily reachable new state, and control theory would predict that nodes with high average controllability would be ideal for moving the network into this new state. This is consistent with our observations, as excitatory neurons represented neurons of high average controllability in these networks and adding autaptic connections specifically to these neurons moved the network into a higher bursting state. Likewise, a network that is currently not bursting can reach another easily reachable non-bursting state by adding autaptic connections to inhibitory neurons, as these neurons represent a majority of the neurons with high average controllability. In comparison, shifting a network into a difficult-to-reach state – e.g., moving a bursting network into a non-bursting network – suggests that neurons with high modal controllability would be the likely targets. However, in such a network, the inhibitory neurons represented nearly all of the nodes with high modal controllability, and adding autaptic connections did little to affect the dynamics. Although this may highlight one gap in using linear control theory to predict the nonlinear dynamics of neural networks, it is worth noting that we could shift bursting networks into non-bursting networks at higher levels of inhibitory synaptic strength, suggesting at least a regime of the network where the predictions align across the two domains.

There are several important limitations to this work that could be explored in future studies. First, these simulations do not include more detailed mechanisms of network remodeling (e.g., spike-timing-dependent plasticity, homeostatic plasticity, presynaptic facilitation) or more detailed models of synaptic currents, either of which may affect the temporal evolution of bursting. However, we do not anticipate that these more detailed features of the model would affect our general result of changes in bursting dynamics associated with changes in underlying network architectures. Second, we examined a range of excitatory and inhibitory synaptic strengths used in past studies, and found that the principal change in dynamical behavior was mediated through the excitatory neurons. Extending the current work into regimes where the inhibitory synaptic strength also influences the bursting behavior of the network would test the robustness of our observations for network controllability, where alterations in either excitatory or inhibitory strength would lead to changes in the neural dynamics. Third, we directly addressed the question of whether predictions of control points in the network derived from a simplified linear model of neuronal dynamics could be used to understand the nonlinear dynamics of the full model. It would be interesting in the future to develop and apply techniques from nonlinear control theory to further understand the mapping between control points and observed network dynamics. Indeed, studying nonlinear control strategies could offer particular utility in extending these examinations to the study of the switch timing of transcriptional activators and repressors within genetic circuits that code for neuronal function.
Finally, the notions of average and modal controllability are agnostic to the specific initial and final states of the system, offering predictions based on the ensemble of local easily-reachable states (average controllability) and the ensemble of distant difficult-to-reach states (modal controllability). It could be interesting in future work to study specific transitions of the neuronal network from a specified initial state of activation to a specified final state of activation, potentially offering insights into the finite set of transitions that a network is expected to display under normal operating conditions^{48}^{,67}.

In this study, we focus predominantly on descriptive statistics and simple predictors of future network performance and dynamic behavior. However, the time is ripe for the field of network neuroscience to take the next step in the *de novo* design of networks theoretically optimized for specific types of computations. These design efforts, capitalizing on generative modeling frameworks, would be especially important for understanding the different structure-function mappings observed across different regions of cortex, as well as in health *versus* disease, and to posit therapeutic interventions for network reconfiguration and recovery. We anticipate that the targeted placement of autaptic connections will be an important dimension of these solutions, as well as more generally being critical for our understanding of the dynamics observed in translation, transcription, and gene regulatory networks.

**How to cite this article:** Wiles, L. *et al*. Autaptic Connections Shift Network Excitability and Bursting. *Sci. Rep.*
**7**, 44006; doi: 10.1038/srep44006 (2017).

**Publisher's note:** Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Funding was provided by US Department of Health & Human Services (NIH NS088176, NS093293) and the Army Research Office (ARO) - W911NF-10-1-0526. DSB and SG acknowledge support from the Office of Naval Research, and from the National Science Foundation award #BCS-1441502, and #BCS-1631550.

The authors declare no competing financial interests.

**Author Contributions** L.W., B.P., and D.G. completed the simulations, analyzed the data, and participated in writing the manuscript and constructing the figures. S.G. and F.P. contributed algorithms for calculating controllability and provided suggestions for figure development. D.S.B. and D.M. planned the study, participated in the analysis and interpretation, contributed to the writing and revising of the manuscript, and contributed to the figure development. All authors reviewed the manuscript.

- Kim D. A. et al. Social network targeting to maximise population behaviour change: a cluster randomised controlled trial. The Lancet 386, 145–153 (2015). [PMC free article] [PubMed]
- Hawrylycz M. et al. Canonical genetic signatures of the adult human brain. Nature neuroscience 18, 1832–1844 (2015). [PMC free article] [PubMed]
- Bader J. S. New connections, new components, real dynamics. Science signaling 2, pe48 (2009). [PMC free article] [PubMed]
- Bassett D. S., Brown J. A., Deshpande V., Carlson J. M. & Grafton S. T. Conserved and variable architecture of human white matter connectivity. Neuroimage 54, 1262–1279 (2011). [PubMed]
- Sporns O. Cerebral cartography and connectomics. Phil. Trans. R. Soc. B 370, 20140173 (2015). [PMC free article] [PubMed]
- Sporns O. Contributions and challenges for network models in cognitive neuroscience. Nature neuroscience 17, 652–660 (2014). [PubMed]
- Medaglia J. D., Lynall M.-E. & Bassett D. S. Cognitive network neuroscience. Journal of cognitive neuroscience (2015). [PMC free article] [PubMed]
- Rubinov M. & Bullmore E. Fledgling pathoconnectomics of psychiatric disorders. Trends in cognitive sciences 17, 641–647 (2013). [PubMed]
- Fornito A., Zalesky A., Pantelis C. & Bullmore E. T. Schizophrenia, neuroimaging and connectomics. Neuroimage 62, 2296–2314 (2012). [PubMed]
- Stam C. J. Modern network science of neurological disorders. Nature Reviews Neuroscience 15, 683–695 (2014). [PubMed]
- Bassett D. S. & Bullmore E. T. Human brain networks in health and disease. Current opinion in neurology 22, 340–347, doi: 10.1097/WCO.0b013e32832d93dd (2009). [PMC free article] [PubMed] [Cross Ref]
- He Y., Chen Z. J. & Evans A. C. Small-world anatomical networks in the human brain revealed by cortical thickness from MRI. Cereb Cortex 17, 2407–2419, doi: 10.1093/cercor/bhl149 (2007). [PubMed] [Cross Ref]
- Roxin A., Riecke H. & Solla S. A. Self-sustained activity in a small-world network of excitable neurons. Phys Rev Lett 92, 198101, doi: 10.1103/PhysRevLett.92.198101 (2004). [PubMed] [Cross Ref]
- Lago-Fernandez L. F., Huerta R., Corbacho F. & Siguenza J. A. Fast response and temporal coherent oscillations in small-world networks. Phys Rev Lett 84, 2758–2761, doi: 10.1103/PhysRevLett.84.2758 (2000). [PubMed] [Cross Ref]
- Hahn G., Bujan A. F., Fregnac Y., Aertsen A. & Kumar A. Communication through resonance in spiking neuronal networks. PLoS Comput Biol 10, e1003811, doi: 10.1371/journal.pcbi.1003811 (2014). [PMC free article] [PubMed] [Cross Ref]
- Riecke H., Roxin A., Madruga S. & Solla S. A. Multiple attractors, long chaotic transients, and failure in small-world networks of excitable neurons. Chaos 17, 026110, doi: 10.1063/1.2743611 (2007). [PubMed] [Cross Ref]
- Perin R., Berger T. K. & Markram H. A synaptic organizing principle for cortical neuronal groups. Proc Natl Acad Sci USA 108, 5419–5424, doi: 10.1073/pnas.1016051108 (2011). [PubMed] [Cross Ref]
- Simon H. A. In Facets of systems science 457–476 (Springer, 1991).
- Pasqualetti F., Zampieri S. & Bullo F. Controllability metrics, limitations and algorithms for complex networks. IEEE Transactions on Control of Network Systems 1, 40–52 (2014).
- Lübke J., Markram H., Frotscher M. & Sakmann B. Frequency and dendritic distribution of autapses established by layer 5 pyramidal neurons in the developing rat neocortex: comparison with synaptic innervation of adjacent neurons of the same class. The Journal of neuroscience 16, 3209–3218 (1996). [PubMed]
- Tamas G., Buhl E. H. & Somogyi P. Massive autaptic self-innervation of GABAergic neurons in cat visual cortex. The Journal of neuroscience 17, 6352–6364 (1997). [PubMed]
- Cobb S. et al. Synaptic effects of identified interneurons innervating both interneurons and pyramidal cells in the rat hippocampus. Neuroscience 79, 629–648 (1997). [PubMed]
- Bacci A., Huguenard J. R. & Prince D. A. Functional autaptic neurotransmission in fast-spiking interneurons: a novel form of feedback inhibition in the neocortex. The Journal of neuroscience 23, 859–866 (2003). [PubMed]
- Bacci A., Rudolph U., Huguenard J. R. & Prince D. A. Major differences in inhibitory synaptic transmission onto two neocortical interneuron subclasses. The Journal of neuroscience 23, 9664–9674 (2003). [PubMed]
- Megias M., Emri Z., Freund T. F. & Gulyas A. I. Total number and distribution of inhibitory and excitatory synapses on hippocampal CA1 pyramidal cells. Neuroscience 102, 527–540 (2001). [PubMed]
- Rusin C. G., Johnson S. E., Kapur J. & Hudson J. L. Engineering the synchronization of neuron action potentials using global time-delayed feedback stimulation. Physical Review E 84, 066202 (2011). [PubMed]
- Wang H., Ma J., Chen Y. & Chen Y. Effect of an autapse on the firing pattern transition in a bursting neuron. Communications in Nonlinear Science and Numerical Simulation 19, 3242–3254 (2014).
- Wang H., Wang L., Chen Y. & Chen Y. Effect of autaptic activity on the response of a Hodgkin-Huxley neuron. Chaos: An Interdisciplinary Journal of Nonlinear Science 24, 033122 (2014). [PubMed]
- Hashemi M., Valizadeh A. & Azizi Y. Effect of duration of synaptic activity on spike rate of a Hodgkin-Huxley neuron with delayed feedback. Physical Review E 85, 021917 (2012). [PubMed]
- Izhikevich E. M. Simple model of spiking neurons. IEEE Transactions on neural networks 14, 1569–1572 (2003). [PubMed]
- Izhikevich E. M. Polychronization: computation with spikes. Neural computation 18, 245–282 (2006). [PubMed]
- Soriano J., Martínez M. R., Tlusty T. & Moses E. Development of input connections in neural cultures. Proceedings of the National Academy of Sciences 105, 13758–13763 (2008). [PubMed]
- Ercsey-Ravasz M. et al. A predictive network model of cerebral cortical connectivity based on a distance rule. Neuron 80, 184–197 (2013). [PMC free article] [PubMed]
- Song S., Sjöström P. J., Reigl M., Nelson S. & Chklovskii D. B. Highly nonrandom features of synaptic connectivity in local cortical circuits. PLoS Biol 3, e68 (2005). [PubMed]
- Galán R. F. On how network architecture determines the dominant patterns of spontaneous neural activity. PLoS One 3, e2148 (2008). [PMC free article] [PubMed]
- Richardson M. J., Melamed O., Silberberg G., Gerstner W. & Markram H. Short-term synaptic plasticity orchestrates the response of pyramidal cells and interneurons to population bursts. Journal of computational neuroscience 18, 323–331 (2005). [PubMed]
- Pfister J.-P. & Gerstner W. Triplets of spikes in a model of spike timing-dependent plasticity. The Journal of neuroscience 26, 9673–9682 (2006). [PubMed]
- Turova T. S. & Villa A. E. On a phase diagram for random neural networks with embedded spike timing dependent plasticity. Biosystems 89, 280–286 (2007). [PubMed]
- Silver R. A. Neuronal arithmetic. Nature Reviews Neuroscience 11, 474–489 (2010). [PMC free article] [PubMed]
- Buzsaki G. & Mizuseki K. The log-dynamic brain: how skewed distributions affect network operations. Nature Reviews Neuroscience 15, 264–278, doi: 10.1038/nrn3687 (2014). [PMC free article] [PubMed] [Cross Ref]
- van den Heuvel M. P. & Sporns O. Network hubs in the human brain. Trends Cogn Sci 17, 683–696, doi: 10.1016/j.tics.2013.09.012 (2013). [PubMed] [Cross Ref]
- Gu S. et al. Controllability of structural brain networks. Nature communications 6 (2015). [PMC free article] [PubMed]
- Hamdan A. & Nayfeh A. Measures of modal controllability and observability for first-and second-order linear systems. Journal of guidance, control, and dynamics 12, 421–428 (1989).
- Buckner R. L. et al. Cortical hubs revealed by intrinsic functional connectivity: mapping, assessment of stability, and relation to Alzheimer’s disease. The Journal of neuroscience: the official journal of the Society for Neuroscience 29, 1860–1873, doi: 10.1523/JNEUROSCI.5062-08.2009 (2009). [PMC free article] [PubMed] [Cross Ref]
- Achard S., Salvador R., Whitcher B., Suckling J. & Bullmore E. A resilient, low-frequency, small-world human brain functional network with highly connected association cortical hubs. The Journal of neuroscience: the official journal of the Society for Neuroscience 26, 63–72, doi: 10.1523/JNEUROSCI.3874-05.2006 (2006). [PubMed] [Cross Ref]
- Hagmann P. et al. Mapping the structural core of human cerebral cortex. PLoS Biol 6, e159, doi: 10.1371/journal.pbio.0060159 (2008). [PubMed] [Cross Ref]
- Liu Y.-Y., Slotine J.-J. & Barabási A.-L. Controllability of complex networks. Nature 473, 167–173 (2011). [PubMed]
- Betzel R. F., Gu S., Medaglia J. D., Pasqualetti F. & Bassett D. S. Optimally controlling the human connectome: the role of network topology. Sci Rep 6, 30770, doi: 10.1038/srep30770 (2016). [PMC free article] [PubMed] [Cross Ref]
- Muldoon S. F. et al. Stimulation-Based Control of Dynamic Brain Networks. PLoS Comput Biol 12, e1005076, doi: 10.1371/journal.pcbi.1005076 (2016). [PMC free article] [PubMed] [Cross Ref]
- Honey C. et al. Predicting human resting-state functional connectivity from structural connectivity. Proceedings of the National Academy of Sciences 106, 2035–2040 (2009). [PubMed]
- Galán R. F., Ermentrout G. B. & Urban N. N. Optimal time scale for spike-time reliability: theory, simulations, and experiments. Journal of neurophysiology 99, 277–283 (2008). [PMC free article] [PubMed]
- Puig M. V., Antzoulatos E. G. & Miller E. K. Prefrontal dopamine in associative learning and memory. Neuroscience 282, 217–229 (2014). [PMC free article] [PubMed]
- Hanslmayr S., Staresina B. P. & Bowman H. Oscillations and Episodic Memory: Addressing the Synchronization/Desynchronization Conundrum. Trends in neurosciences 39, 16–25 (2016). [PMC free article] [PubMed]
- Bastos A. M., Vezoli J. & Fries P. Communication through coherence with inter-areal delays. Current opinion in neurobiology 31, 173–180 (2015). [PubMed]
- Clayton M. S., Yeung N. & Kadosh R. C. The roles of cortical oscillations in sustained attention. Trends in cognitive sciences 19, 188–195 (2015). [PubMed]
- Fries P. Rhythms for cognition: communication through coherence. Neuron 88, 220–235 (2015). [PMC free article] [PubMed]
- Engel A. K., Roelfsema P., Fries P., Brecht M. & Singer W. Role of the temporal domain for response selection and perceptual binding. Cerebral Cortex 7, 571–582 (1997). [PubMed]
- Khambhati A. N. et al. Dynamic network drivers of seizure generation, propagation and termination in human neocortical epilepsy. PLoS Comput Biol 11, e1004608 (2015). [PMC free article] [PubMed]
- Megıas M., Emri Z., Freund T. & Gulyas A. Total number and distribution of inhibitory and excitatory synapses on hippocampal CA1 pyramidal cells. Neuroscience 102, 527–540 (2001). [PubMed]
- Luhmann H. J. et al. Spontaneous Neuronal Activity in Developing Neocortical Networks: From Single Cells to Large-Scale Interactions. Frontiers in neural circuits 10 (2016). [PMC free article] [PubMed]
- Maier I. C. & Schwab M. E. Sprouting, regeneration and circuit formation in the injured spinal cord: factors and activity. Philosophical Transactions of the Royal Society B: Biological Sciences 361, 1611–1634 (2006). [PMC free article] [PubMed]
- Bading H. Nuclear calcium signalling in the regulation of brain function. Nature Reviews Neuroscience 14, 593–608 (2013). [PubMed]
- Wyllie D., Livesey M. & Hardingham G. Influence of GluN2 subunit identity on NMDA receptor function. Neuropharmacology 74, 4–17 (2013). [PMC free article] [PubMed]
- Reinschke K. J. Multivariable control: A graph theoretic approach. (1988).
- Kailath T. Linear systems. Vol. 156 (Prentice-Hall Englewood Cliffs, NJ, 1980).
- Kalman R., Ho Y. & Narendra K. Contributions to differential equations. New York, NY, USA: Interscience 1, 189–213 (1963).
- Gu S. et al. Optimal Trajectories of Brain State Transitions. ArXiv e-prints 1607, http://adsabs.harvard.edu/abs/2016arXiv160701706G (2016).

Articles from Scientific Reports are provided here courtesy of **Nature Publishing Group**
