25th Annual Computational Neuroscience Meeting: CNS-2016
BMC Neuroscience 2016;17(Suppl 1):54.
Table of contents
A1 Functional advantages of cell-type heterogeneity in neural circuits
Tatyana O. Sharpee
A2 Mesoscopic modeling of propagating waves in visual cortex
Alain Destexhe
A3 Dynamics and biomarkers of mental disorders
Mitsuo Kawato
F1 Precise recruitment of spiking output at theta frequencies requires dendritic h-channels in multi-compartment models of oriens-lacunosum/moleculare hippocampal interneurons
Vladislav Sekulić, Frances K. Skinner
F2 Kernel methods in reconstruction of current sources from extracellular potentials for single cells and the whole brains
Daniel K. Wójcik, Chaitanya Chintaluri, Dorottya Cserpán, Zoltán Somogyvári
F3 The synchronized periods depend on intracellular transcriptional repression mechanisms in circadian clocks
Jae Kyoung Kim, Zachary P. Kilpatrick, Matthew R. Bennett, Kresimir Josić
O1 Assessing irregularity and coordination of spiking-bursting rhythms in central pattern generators
Irene Elices, David Arroyo, Rafael Levi, Francisco B. Rodriguez, Pablo Varona
O2 Regulation of top-down processing by cortically-projecting parvalbumin positive neurons in basal forebrain
Eunjin Hwang, Bowon Kim, Hio-Been Han, Tae Kim, James T. McKenna, Ritchie E. Brown, Robert W. McCarley, Jee Hyun Choi
O3 Modeling auditory stream segregation, build-up and bistability
James Rankin, Pamela Osborn Popp, John Rinzel
O4 Strong competition between tonotopic neural ensembles explains pitch-related dynamics of auditory cortex evoked fields
Alejandro Tabas, André Rupp, Emili Balaguer-Ballester
O5 A simple model of retinal response to multi-electrode stimulation
Matias I. Maturana, David B. Grayden, Shaun L. Cloherty, Tatiana Kameneva, Michael R. Ibbotson, Hamish Meffin
O6 Noise correlations in V4 area correlate with behavioral performance in visual discrimination task
Veronika Koren, Timm Lochmann, Valentin Dragoi, Klaus Obermayer
O7 Input-location dependent gain modulation in cerebellar nucleus neurons
Maria Psarrou, Maria Schilstra, Neil Davey, Benjamin Torben-Nielsen, Volker Steuber
O8 Analytic solution of cable energy function for cortical axons and dendrites
Huiwen Ju, Jiao Yu, Michael L. Hines, Liang Chen, Yuguo Yu
O9 C. elegans interactome: interactive visualization of Caenorhabditis elegans worm neuronal network
Jimin Kim, Will Leahy, Eli Shlizerman
O10 Is the model any good? Objective criteria for computational neuroscience model selection
Justas Birgiolas, Richard C. Gerkin, Sharon M. Crook
O11 Cooperation and competition of gamma oscillation mechanisms
Atthaphon Viriyopase, Raoul-Martin Memmesheimer, Stan Gielen
O12 A discrete structure of the brain waves
Yuri Dabaghian, Justin DeVito, Luca Perotti
O13 Direction-specific silencing of the Drosophila gaze stabilization system
Anmo J. Kim, Lisa M. Fenk, Cheng Lyu, Gaby Maimon
O14 What does the fruit fly think about values? A model of olfactory associative learning
Chang Zhao, Yves Widmer, Simon Sprecher, Walter Senn
O15 Effects of ionic diffusion on power spectra of local field potentials (LFP)
Geir Halnes, Tuomo Mäki-Marttunen, Daniel Keller, Klas H. Pettersen, Ole A. Andreassen, Gaute T. Einevoll
O16 Large-scale cortical models towards understanding relationship between brain structure abnormalities and cognitive deficits
Yasunori Yamada
O17 Spatial coarse-graining the brain: origin of minicolumns
Moira L. Steyn-Ross, D. Alistair Steyn-Ross
O18 Modeling large-scale cortical networks with laminar structure
Jorge F. Mejias, John D. Murray, Henry Kennedy, Xiao-Jing Wang
O19 Information filtering by partial synchronous spikes in a neural population
Alexandra Kruscha, Jan Grewe, Jan Benda, Benjamin Lindner
O20 Decoding context-dependent olfactory valence in Drosophila
Laurent Badel, Kazumi Ohta, Yoshiko Tsuchimoto, Hokto Kazama
P1 Neural network as a scale-free network: the role of a hub
B. Kahng
P2 Hemodynamic responses to emotions and decisions using near-infrared spectroscopy optical imaging
Nicoladie D. Tam
P3 Phase space analysis of hemodynamic responses to intentional movement directions using functional near-infrared spectroscopy (fNIRS) optical imaging technique
Nicoladie D. Tam, Luca Pollonini, George Zouridakis
P4 Modeling jamming avoidance of weakly electric fish
Jaehyun Soh, DaeEun Kim
P5 Synergy and redundancy of retinal ganglion cells in prediction
Minsu Yoo, S. E. Palmer
P6 A neural field model with a third dimension representing cortical depth
Viviana Culmone, Ingo Bojak
P7 Network analysis of a probabilistic connectivity model of the Xenopus tadpole spinal cord
Andrea Ferrario, Robert Merrison-Hort, Roman Borisyuk
P8 The recognition dynamics in the brain
Chang Sub Kim
P9 Multivariate spike train analysis using a positive definite kernel
Taro Tezuka
P10 Synchronization of burst periods may govern slow brain dynamics during general anesthesia
Pangyu Joo
P11 The ionic basis of heterogeneity affects stochastic synchrony
Young-Ah Rho, Shawn D. Burton, G. Bard Ermentrout, Jaeseung Jeong, Nathaniel N. Urban
P12 Circular statistics of noise in spike trains with a periodic component
Petr Marsalek
P14 Representations of directions in EEG-BCI using Gaussian readouts
Hoon-Hee Kim, Seok-hyun Moon, Do-won Lee, Sung-beom Lee, Ji-yong Lee, Jaeseung Jeong
P15 Action selection and reinforcement learning in basal ganglia during reaching movements
Yaroslav I. Molkov, Khaldoun Hamade, Wondimu Teka, William H. Barnett, Taegyo Kim, Sergey Markin, Ilya A. Rybak
P17 Axon guidance: modeling axonal growth in T-Junction assay
Csaba Forro, Harald Dermutz, László Demkó, János Vörös
P19 Transient cell assembly networks encode persistent spatial memories
Yuri Dabaghian, Andrey Babichev
P20 Theory of population coupling and applications to describe high order correlations in large populations of interacting neurons
Haiping Huang
P21 Design of biologically-realistic simulations for motor control
Sergio Verduzco-Flores
P22 Towards understanding the functional impact of the behavioural variability of neurons
Filipa Dos Santos, Peter Andras
P23 Different oscillatory dynamics underlying gamma entrainment deficits in schizophrenia
Christoph Metzner, Achim Schweikard, Bartosz Zurowski
P24 Memory recall and spike frequency adaptation
James P. Roach, Leonard M. Sander, Michal R. Zochowski
P25 Stability of neural networks and memory consolidation preferentially occur near criticality
Quinton M. Skilling, Nicolette Ognjanovski, Sara J. Aton, Michal Zochowski
P26 Stochastic oscillation in self-organized critical states of small systems: sensitive resting state in neural systems
Sheng-Jun Wang, Guang Ouyang, Jing Guang, Mingsha Zhang, K. Y. Michael Wong, Changsong Zhou
P27 Neurofield: a C++ library for fast simulation of 2D neural field models
Peter A. Robinson, Paula Sanz-Leon, Peter M. Drysdale, Felix Fung, Romesh G. Abeysuriya, Chris J. Rennie, Xuelong Zhao
P28 Action-based grounding: Beyond encoding/decoding in neural code
Yoonsuck Choe, Huei-Fang Yang
P29 Neural computation in a dynamical system with multiple time scales
Yuanyuan Mi, Xiaohan Lin, Si Wu
P30 Maximum entropy models for 3D layouts of orientation selectivity
Joscha Liedtke, Manuel Schottdorf, Fred Wolf
P31 A behavioral assay for probing computations underlying curiosity in rodents
Yoriko Yamamura, Jeffery R. Wickens
P32 Using statistical sampling to balance error function contributions to optimization of conductance-based models
Timothy Rumbell, Julia Ramsey, Amy Reyes, Danel Draguljić, Patrick R. Hof, Jennifer Luebke, Christina M. Weaver
P33 Exploration and implementation of a self-growing and self-organizing neuron network building algorithm
Hu He, Xu Yang, Hailin Ma, Zhiheng Xu, Yuzhe Wang
P34 Disrupted resting state brain network in obese subjects: a data-driven graph theory analysis
Kwangyeol Baek, Laurel S. Morris, Prantik Kundu, Valerie Voon
P35 Dynamics of cooperative excitatory and inhibitory plasticity
Everton J. Agnes, Tim P. Vogels
P36 Frequency-dependent oscillatory signal gating in feed-forward networks of integrate-and-fire neurons
William F. Podlaski, Tim P. Vogels
P37 Phenomenological neural model for adaptation of neurons in area IT
Martin Giese, Pradeep Kuravi, Rufin Vogels
P38 ICGenealogy: towards a common topology of neuronal ion channel function and genealogy in model and experiment
Alexander Seeholzer, William Podlaski, Rajnish Ranjan, Tim Vogels
P39 Temporal input discrimination from the interaction between dynamic synapses and neural subthreshold oscillations
Joaquin J. Torres, Fabiano Baroni, Roberto Latorre, Pablo Varona
P40 Different roles for transient and sustained activity during active visual processing
Bart Gips, Eric Lowet, Mark J. Roberts, Peter de Weerd, Ole Jensen, Jan van der Eerden
P41 Scale-free functional networks of 2D Ising model are highly robust against structural defects: neuroscience implications
Abdorreza Goodarzinick, Mohammad D. Niry, Alireza Valizadeh
P42 High frequency neuron can facilitate propagation of signal in neural networks
Aref Pariz, Shervin S. Parsi, Alireza Valizadeh
P43 Investigating the effect of Alzheimer’s disease related amyloidopathy on gamma oscillations in the CA1 region of the hippocampus
Julia M. Warburton, Lucia Marucci, Francesco Tamagnini, Jon Brown, Krasimira Tsaneva-Atanasova
P44 Long-tailed distributions of inhibitory and excitatory weights in a balanced network with eSTDP and iSTDP
Florence I. Kleberg, Jochen Triesch
P45 Simulation of EMG recording from hand muscle due to TMS of motor cortex
Bahar Moezzi, Nicolangelo Iannella, Natalie Schaworonkow, Lukas Plogmacher, Mitchell R. Goldsworthy, Brenton Hordacre, Mark D. McDonnell, Michael C. Ridding, Jochen Triesch
P46 Structure and dynamics of axon network formed in primary cell culture
Martin Zapotocky, Daniel Smit, Coralie Fouquet, Alain Trembleau
P47 Efficient signal processing and sampling in random networks that generate variability
Sakyasingha Dasgupta, Isao Nishikawa, Kazuyuki Aihara, Taro Toyoizumi
P48 Modeling the effect of riluzole on bursting in respiratory neural networks
Daniel T. Robb, Nick Mellen, Natalia Toporikova
P49 Mapping relaxation training using effective connectivity analysis
Rongxiang Tang, Yi-Yuan Tang
P50 Modeling neuron oscillation of implicit sequence learning
Guangsheng Liang, Seth A. Kiser, James H. Howard, Jr., Yi-Yuan Tang
P51 The role of cerebellar short-term synaptic plasticity in the pathology and medication of downbeat nystagmus
Julia Goncharenko, Neil Davey, Maria Schilstra, Volker Steuber
P52 Nonlinear response of noisy neurons
Sergej O. Voronenko, Benjamin Lindner
P53 Behavioral embedding suggests multiple chaotic dimensions underlie C. elegans locomotion
Tosif Ahamed, Greg Stephens
P54 Fast and scalable spike sorting for large and dense multi-electrodes recordings
Pierre Yger, Baptiste Lefebvre, Giulia Lia Beatrice Spampinato, Elric Esposito, Marcel Stimberg and Olivier Marre
P55 Sufficient sampling rates for fast hand motion tracking
Hansol Choi, Min-Ho Song
P56 Linear readout of object manifolds
SueYeon Chung, Dan D. Lee, Haim Sompolinsky
P57 Differentiating models of intrinsic bursting and rhythm generation of the respiratory pre-Bötzinger complex using phase response curves
Ryan S. Phillips, Jeffrey Smith
P58 The effect of inhibitory cell network interactions during theta rhythms on extracellular field potentials in CA1 hippocampus
Alexandra Pierri Chatzikalymniou, Katie Ferguson, Frances K. Skinner
P59 Expansion recoding through sparse sampling in the cerebellar input layer speeds learning
N. Alex Cayco Gajic, Claudia Clopath, R. Angus Silver
P60 A set of curated cortical models at multiple scales on Open Source Brain
Padraig Gleeson, Boris Marin, Sadra Sadeh, Adrian Quintana, Matteo Cantarelli, Salvador Dura-Bernal, William W. Lytton, Andrew Davison, R. Angus Silver
P61 A synaptic story of dynamical information encoding in neural adaptation
Luozheng Li, Wenhao Zhang, Yuanyuan Mi, Dahui Wang, Si Wu
P62 Physical modeling of rule-observant rodent behavior
Youngjo Song, Sol Park, Ilhwan Choi, Jaeseung Jeong, Hee-sup Shin
P64 Predictive coding in area V4 and prefrontal cortex explains dynamic discrimination of partially occluded shapes
Hannah Choi, Anitha Pasupathy, Eric Shea-Brown
P65 Stability of FORCE learning on spiking and rate-based networks
Dongsung Huh, Terrence J. Sejnowski
P66 Stabilising STDP in striatal neurons for reliable fast state recognition in noisy environments
Simon M. Vogt, Arvind Kumar, Robert Schmidt
P67 Electrodiffusion in one- and two-compartment neuron models for characterizing cellular effects of electrical stimulation
Stephen Van Wert, Steven J. Schiff
P68 STDP improves speech recognition capabilities in spiking recurrent circuits parameterized via differential evolution Markov Chain Monte Carlo
Richard Veale, Matthias Scheutz
P69 Bidirectional transformation between dominant cortical neural activities and phase difference distributions
Sang Wan Lee
P70 Maturation of sensory networks through homeostatic structural plasticity
Júlia Gallinaro, Stefan Rotter
P71 Corticothalamic dynamics: structure, number of solutions and stability of steady-state solutions in the space of synaptic couplings
Paula Sanz-Leon, Peter A. Robinson
P72 Optogenetic versus electrical stimulation of the parkinsonian basal ganglia: a computational study
Leonid L. Rubchinsky, Chung Ching Cheung, Shivakeshavan Ratnadurai-Giridharan
P73 Exact spike-timing distribution reveals higher-order interactions of neurons
Safura Rashid Shomali, Majid Nili Ahmadabadi, Hideaki Shimazaki, S. Nader Rasuli
P74 Neural mechanism of visual perceptual learning using a multi-layered neural network
Xiaochen Zhao, Malte J. Rasch
P75 Inferring collective spiking dynamics from mostly unobserved systems
Jens Wilting, Viola Priesemann
P76 How to infer distributions in the brain from subsampled observations
Anna Levina, Viola Priesemann
P77 Influences of embedding and estimation strategies on the inferred memory of single spiking neurons
Lucas Rudelt, Joseph T. Lizier, Viola Priesemann
P78 A nearest-neighbours based estimator for transfer entropy between spike trains
Joseph T. Lizier, Richard E. Spinney, Mikail Rubinov, Michael Wibral, Viola Priesemann
P79 Active learning of psychometric functions with multinomial logistic models
Ji Hyun Bak, Jonathan Pillow
P81 Inferring low-dimensional network dynamics with variational latent Gaussian process
Yuan Zhao, Il Memming Park
P82 Computational investigation of energy landscapes in the resting state subcortical brain network
Jiyoung Kang, Hae-Jeong Park
P83 Local repulsive interaction between retinal ganglion cells can generate a consistent spatial periodicity of orientation map
Jaeson Jang, Se-Bum Paik
P84 Phase duration of bistable perception reveals intrinsic time scale of perceptual decision under noisy condition
Woochul Choi, Se-Bum Paik
P85 Feedforward convergence between retina and primary visual cortex can determine the structure of orientation map
Changju Lee, Jaeson Jang, Se-Bum Paik
P86 Computational method classifying neural network activity patterns for imaging data
Min Song, Hyeonsu Lee, Se-Bum Paik
P87 Symmetry of spike-timing-dependent-plasticity kernels regulates volatility of memory
Youngjin Park, Woochul Choi, Se-Bum Paik
P88 Effects of time-periodic coupling strength on the first-spike latency dynamics of a scale-free network of stochastic Hodgkin-Huxley neurons
Ergin Yilmaz, Veli Baysal, Mahmut Ozer
P89 Spectral properties of spiking responses in V1 and V4 change within the trial and are highly relevant for behavioral performance
Veronika Koren, Klaus Obermayer
P90 Methods for building accurate models of individual neurons
Daniel Saska, Thomas Nowotny
P91 A full size mathematical model of the early olfactory system of honeybees
Ho Ka Chan, Alan Diamond, Thomas Nowotny
P92 Stimulation-induced tuning of ongoing oscillations in spiking neural networks
Christoph S. Herrmann, Micah M. Murray, Silvio Ionta, Axel Hutt, Jérémie Lefebvre
P93 Decision-specific sequences of neural activity in balanced random networks driven by structured sensory input
Philipp Weidel, Renato Duarte, Abigail Morrison
P94 Modulation of tuning induced by abrupt reduction of SST cell activity
Jung H. Lee, Ramakrishnan Iyer, Stefan Mihalas
P95 The functional role of VIP cell activation during locomotion
Jung H. Lee, Ramakrishnan Iyer, Christof Koch, Stefan Mihalas
P96 Stochastic inference with spiking neural networks
Mihai A. Petrovici, Luziwei Leng, Oliver Breitwieser, David Stöckel, Ilja Bytschok, Roman Martel, Johannes Bill, Johannes Schemmel, Karlheinz Meier
P97 Modeling orientation-selective electrical stimulation with retinal prostheses
Timothy B. Esler, Anthony N. Burkitt, David B. Grayden, Robert R. Kerr, Bahman Tahayori, Hamish Meffin
P98 Ion channel noise can explain firing correlation in auditory nerves
Bahar Moezzi, Nicolangelo Iannella, Mark D. McDonnell
P99 Limits of temporal encoding of thalamocortical inputs in a neocortical microcircuit
Max Nolte, Michael W. Reimann, Eilif Muller, Henry Markram
P100 On the representation of arm reaching movements: a computational model
Antonio Parziale, Rosa Senatore, Angelo Marcelli
P101 A computational model for investigating the role of cerebellum in acquisition and retention of motor behavior
Rosa Senatore, Antonio Parziale, Angelo Marcelli
P102 The emergence of semantic categories from a large-scale brain network of semantic knowledge
K. Skiker, M. Maouene
P103 Multiscale modeling of M1 multitarget pharmacotherapy for dystonia
Samuel A. Neymotin, Salvador Dura-Bernal, Alexandra Seidenstein, Peter Lakatos, Terence D. Sanger, William W. Lytton
P104 Effect of network size on computational capacity
Salvador Dura-Bernal, Rosemary J. Menzies, Campbell McLauchlan, Sacha J. van Albada, David J. Kedziora, Samuel Neymotin, William W. Lytton, Cliff C. Kerr
P105 NetPyNE: a Python package for NEURON to facilitate development and parallel simulation of biological neuronal networks
Salvador Dura-Bernal, Benjamin A. Suter, Samuel A. Neymotin, Cliff C. Kerr, Adrian Quintana, Padraig Gleeson, Gordon M. G. Shepherd, William W. Lytton
P107 Inter-areal and inter-regional inhomogeneity in co-axial anisotropy of Cortical Point Spread in human visual areas
Juhyoung Ryu, Sang-Hun Lee
P108 Two Bayesian quanta of uncertainty explain the temporal dynamics of cortical activity in the non-sensory areas during bistable perception
Joonwon Lee, Sang-Hun Lee
P109 Optimal and suboptimal integration of sensory and value information in perceptual decision making
Hyang Jung Lee, Sang-Hun Lee
P110 A Bayesian algorithm for phoneme perception and its neural implementation
Daeseob Lim, Sang-Hun Lee
P111 Complexity of EEG signals is reduced during unconsciousness induced by ketamine and propofol
Jisung Wang, Heonsoo Lee
P112 Self-organized criticality of neural avalanche in a neural model on complex networks
Nam Jung, Le Anh Quang, Seung Eun Maeng, Tae Ho Lee, Jae Woo Lee
P113 Dynamic alterations in connection topology of the hippocampal network during ictal-like epileptiform activity in an in vitro rat model
Chang-hyun Park, Sora Ahn, Jangsup Moon, Yun Seo Choi, Juhee Kim, Sang Beom Jun, Seungjun Lee, Hyang Woon Lee
P114 Computational model to replicate seizure suppression effect by electrical stimulation
Sora Ahn, Sumin Jo, Eunji Jun, Suin Yu, Hyang Woon Lee, Sang Beom Jun, Seungjun Lee
P115 Identifying excitatory and inhibitory synapses in neuronal networks from spike trains using sorted local transfer entropy
Felix Goetze, Pik-Yin Lai
P116 Neural network model for obstacle avoidance based on neuromorphic computational model of boundary vector cell and head direction cell
Seonghyun Kim, Jeehyun Kwag
P117 Dynamic gating of spike pattern propagation by Hebbian and anti-Hebbian spike timing-dependent plasticity in excitatory feedforward network model
Hyun Jae Jang, Jeehyun Kwag
P118 Inferring characteristics of input correlations of cells exhibiting up-down state transitions in the rat striatum
Marko Filipović, Ramon Reig, Ad Aertsen, Gilad Silberberg, Arvind Kumar
P119 Graph properties of the functional connected brain under the influence of Alzheimer’s disease
Claudia Bachmann, Simone Buttler, Heidi Jacobs, Kim Dillen, Gereon R. Fink, Juraj Kukolja, Abigail Morrison
P120 Learning sparse representations in the olfactory bulb
Daniel Kepple, Hamza Giaffar, Dima Rinberg, Steven Shea, Alex Koulakov
P121 Functional classification of homologous basal-ganglia networks
Jyotika Bahuguna, Tom Tetzlaff, Abigail Morrison, Arvind Kumar, Jeanette Hellgren Kotaleski
P122 Short term memory based on multistability
Tim Kunze, Andre Peterson, Thomas Knösche
P123 A physiologically plausible, computationally efficient model and simulation software for mammalian motor units
Minjung Kim, Hojeong Kim
P125 Decoding laser-induced somatosensory information from EEG
Ji Sung Park, Ji Won Yeon, Sung-Phil Kim
P126 Phase synchronization of alpha activity for EEG-based personal authentication
Jae-Hwan Kang, Chungho Lee, Sung-Phil Kim
P129 Investigating phase-lags in sEEG data using spatially distributed time delays in a large-scale brain network model
Andreas Spiegler, Spase Petkoski, Matias J. Palva, Viktor K. Jirsa
P130 Epileptic seizures in the unfolding of a codimension-3 singularity
Maria L. Saggio, Silvan F. Siep, Andreas Spiegler, William C. Stacey, Christophe Bernard, Viktor K. Jirsa
P131 Incremental dimensional exploratory reasoning under multi-dimensional environment
Oh-hyeon Choung, Yong Jeong
P132 A low-cost model of eye movements and memory in personal visual cognition
Yong-il Lee, Jaeseung Jeong
P133 Complex network analysis of structural connectome of autism spectrum disorder patients
Su Hyun Kim, Mir Jeong, Jaeseung Jeong
P134 Cognitive motives and the neural correlates underlying human social information transmission, gossip
Jeungmin Lee, Jaehyung Kwon, Jerald D. Kralik, Jaeseung Jeong
P135 EEG hyperscanning detects neural oscillation for the social interaction during the economic decision-making
Jaehwan Jahng, Dong-Uk Hwang, Jaeseung Jeong
P136 Detecting purchase decision based on hyperfrontality of the EEG
Jae-Hyung Kwon, Sang-Min Park, Jaeseung Jeong
P137 Vulnerability-based critical neurons, synapses, and pathways in the Caenorhabditis elegans connectome
Seongkyun Kim, Hyoungkyu Kim, Jerald D. Kralik, Jaeseung Jeong
P138 Motif analysis reveals functionally asymmetrical neurons in C. elegans
Pyeong Soo Kim, Seongkyun Kim, Hyoungkyu Kim, Jaeseung Jeong
P139 Computational approach to preference-based serial decision dynamics: do temporal discounting and working memory affect it?
Sangsup Yoon, Jaehyung Kwon, Sewoong Lim, Jaeseung Jeong
P141 Social stress induced neural network reconfiguration affects decision making and learning in zebrafish
Choongseok Park, Thomas Miller, Katie Clements, Sungwoo Ahn, Eoon Hye Ji, Fadi A. Issa
P142 Descriptive, generative, and hybrid approaches for neural connectivity inference from neural activity data
JeongHun Baek, Shigeyuki Oba, Junichiro Yoshimoto, Kenji Doya, Shin Ishii
P145 Divergent-convergent synaptic connectivities accelerate coding in multilayered sensory systems
Thiago S. Mosqueiro, Martin F. Strube-Bloss, Brian Smith, Ramon Huerta
P146 Swinging networks
Michal Hadrava, Jaroslav Hlinka
P147 Inferring dynamically relevant motifs from oscillatory stimuli: challenges, pitfalls, and solutions
Hannah Bos, Moritz Helias
P148 Spatiotemporal mapping of brain network dynamics during cognitive tasks using magnetoencephalography and deep learning
Charles M. Welzig, Zachary J. Harper
P149 Multiscale complexity analysis for the segmentation of MRI images
Won Sup Kim, In-Seob Shin, Hyeon-Man Baek, Seung Kee Han
P150 A neuro-computational model of emotional attention
René Richter, Julien Vitay, Frederick Beuth, Fred H. Hamker
P151 Multi-site delayed feedback stimulation in parkinsonian networks
Kelly Toppin, Yixin Guo
P152 Bistability in Hodgkin–Huxley-type equations
Tatiana Kameneva, Hamish Meffin, Anthony N. Burkitt, David B. Grayden
P153 Phase changes in postsynaptic spiking due to synaptic connectivity and short term plasticity: mathematical analysis of frequency dependency
Mark D. McDonnell, Bruce P. Graham
P154 Quantifying resilience patterns in brain networks: the importance of directionality
Penelope J. Kale, Leonardo L. Gollo
P155 Dynamics of rate-model networks with separate excitatory and inhibitory populations
Merav Stern, L. F. Abbott
P156 A model for multi-stable dynamics in action recognition modulated by integration of silhouette and shading cues
Leonid A. Fedorov, Martin A. Giese
P157 Spiking model for the interaction between action recognition and action execution
Mohammad Hovaidi Ardestani, Martin Giese
P158 Surprise-modulated belief update: how to learn within changing environments?
Mohammad Javad Faraji, Kerstin Preuschoff, Wulfram Gerstner
P159 A fast, stochastic and adaptive model of auditory nerve responses to cochlear implant stimulation
Margriet J. van Gendt, Jeroen J. Briaire, Randy K. Kalkman, Johan H. M. Frijns
P160 Quantitative comparison of graph theoretical measures of simulated and empirical functional brain networks
Won Hee Lee, Sophia Frangou
P161 Determining discriminative properties of fMRI signals in schizophrenia using highly comparative time-series analysis
Ben D. Fulcher, Patricia H. P. Tran, Alex Fornito
P162 Emergence of narrowband LFP oscillations from completely asynchronous activity during seizures and high-frequency oscillations
Stephen V. Gliske, William C. Stacey, Eugene Lim, Katherine A. Holman, Christian G. Fink
P163 Neuronal diversity in structure and function: cross-validation of anatomical and physiological classification of retinal ganglion cells in the mouse
Jinseop S. Kim, Shang Mu, Kevin L. Briggman, H. Sebastian Seung, the EyeWirers
P164 Analysis and modelling of transient firing rate changes in area MT in response to rapid stimulus feature changes
Detlef Wegener, Lisa Bohnenkamp, Udo A. Ernst
P165 Step-wise model fitting accounting for high-resolution spatial measurements: construction of a layer V pyramidal cell model with reduced morphology
Tuomo Mäki-Marttunen, Geir Halnes, Anna Devor, Christoph Metzner, Anders M. Dale, Ole A. Andreassen, Gaute T. Einevoll
P166 Contributions of schizophrenia-associated genes to neuron firing and cardiac pacemaking: a polygenic modeling approach
Tuomo Mäki-Marttunen, Glenn T. Lines, Andy Edwards, Aslak Tveito, Anders M. Dale, Gaute T. Einevoll, Ole A. Andreassen
P167 Local field potentials in a 4 × 4 mm2 multi-layered network model
Espen Hagen, Johanna Senk, Sacha J. van Albada, Markus Diesmann
P168 A spiking network model explains multi-scale properties of cortical dynamics
Maximilian Schmidt, Rembrandt Bakker, Kelly Shen, Gleb Bezgin, Claus-Christian Hilgetag, Markus Diesmann, Sacha Jennifer van Albada
P169 Using joint weight-delay spike-timing dependent plasticity to find polychronous neuronal groups
Haoqi Sun, Olga Sourina, Guang-Bin Huang, Felix Klanner, Cornelia Denk
P170 Tensor decomposition reveals RSNs in simulated resting state fMRI
Katharina Glomb, Adrián Ponce-Alvarez, Matthieu Gilson, Petra Ritter, Gustavo Deco
P171 Getting in the groove: testing a new model-based method for comparing task-evoked vs resting-state activity in fMRI data on music listening
Matthieu Gilson, Maria AG Witek, Eric F. Clarke, Mads Hansen, Mikkel Wallentin, Gustavo Deco, Morten L. Kringelbach, Peter Vuust
P172 STochastic engine for pathway simulation (STEPS) on massively parallel processors
Guido Klingbeil, Erik De Schutter
P173 Toolkit support for complex parallel spatial stochastic reaction–diffusion simulation in STEPS
Weiliang Chen, Erik De Schutter
P174 Modeling the generation and propagation of Purkinje cell dendritic spikes caused by parallel fiber synaptic input
Yunliang Zang, Erik De Schutter
P175 Dendritic morphology determines how dendrites are organized into functional subunits
Sungho Hong, Akira Takashima, Erik De Schutter
P176 A model of Ca2+/calmodulin-dependent protein kinase II activity in long term depression at Purkinje cells
Criseida Zamora, Andrew R. Gallimore, Erik De Schutter
P177 Reward-modulated learning of population-encoded vectors for insect-like navigation in embodied agents
Dennis Goldschmidt, Poramate Manoonpong, Sakyasingha Dasgupta
P178 Data-driven neural models part II: connectivity patterns of human seizures
Philippa J. Karoly, Dean R. Freestone, Daniel Soundry, Levin Kuhlmann, Liam Paninski, Mark Cook
P179 Data-driven neural models part I: state and parameter estimation
Dean R. Freestone, Philippa J. Karoly, Daniel Soundry, Levin Kuhlmann, Mark Cook
P180 Spectral and spatial information processing in human auditory streaming
Jaejin Lee, Yonatan I. Fishman, Yale E. Cohen
P181 A tuning curve for the global effects of local perturbations in neural activity: Mapping the systems-level susceptibility of the brain
Leonardo L. Gollo, James A. Roberts, Luca Cocchi
P182 Diverse homeostatic responses to visual deprivation mediated by neural ensembles
Yann Sweeney, Claudia Clopath
P183 Opto-EEG: a novel method for investigating functional connectome in mouse brain based on optogenetics and high density electroencephalography
Soohyun Lee, Woo-Sung Jung, Jee Hyun Choi
P184 Biphasic responses of frontal gamma network to repetitive sleep deprivation during REM sleep
Bowon Kim, Youngsoo Kim, Eunjin Hwang, Jee Hyun Choi
P185 Brain-state correlate and cortical connectivity for frontal gamma oscillations in top-down fashion assessed by auditory steady-state response
Younginha Jung, Eunjin Hwang, Yoon-Kyu Song, Jee Hyun Choi
P186 Neural field model of localized orientation selective activation in V1
James Rankin, Frédéric Chavane
P187 An oscillatory network model of Head direction and Grid cells using locomotor inputs
Karthik Soman, Vignesh Muralidharan, V. Srinivasa Chakravarthy
P188 A computational model of hippocampus inspired by the functional architecture of basal ganglia
Karthik Soman, Vignesh Muralidharan, V. Srinivasa Chakravarthy
P189 A computational architecture to model the microanatomy of the striatum and its functional properties
Sabyasachi Shivkumar, Vignesh Muralidharan, V. Srinivasa Chakravarthy
P190 A scalable cortico-basal ganglia model to understand the neural dynamics of targeted reaching
Vignesh Muralidharan, Alekhya Mandali, B. Pragathi Priyadharsini, Hima Mehta, V. Srinivasa Chakravarthy
P191 Emergence of radial orientation selectivity from synaptic plasticity
Catherine E. Davey, David B. Grayden, Anthony N. Burkitt
P192 How do hidden units shape effective connections between neurons?
Braden A. W. Brinkman, Tyler Kekona, Fred Rieke, Eric Shea-Brown, Michael Buice
P193 Characterization of neural firing in the presence of astrocyte-synapse signaling
Maurizio De Pittà, Hugues Berry, Nicolas Brunel
P194 Metastability of spatiotemporal patterns in a large-scale network model of brain dynamics
James A. Roberts, Leonardo L. Gollo, Michael Breakspear
P195 Comparison of three methods to quantify detection and discrimination capacity estimated from neural population recordings
Gary Marsat, Jordan Drew, Phillip D. Chapman, Kevin C. Daly, Samual P. Bradley
P196 Quantifying the constraints for independent evoked and spontaneous NMDA receptor mediated synaptic transmission at individual synapses
Sat Byul Seo, Jianzhong Su, Ege T. Kavalali, Justin Blackwell
P199 Gamma oscillation via adaptive exponential integrate-and-fire neurons
LieJune Shiau, Laure Buhry, Kanishka Basnayake
P200 Visual face representations during memory retrieval compared to perception
Sue-Hyun Lee, Brandon A. Levy, Chris I. Baker
P201 Top-down modulation of sequential activity within packets modeled using avalanche dynamics
Timothée Leleu, Kazuyuki Aihara
Q28 An auto-encoder network realizes sparse features under the influence of desynchronized vascular dynamics
Ryan T. Philips, Karishma Chhabria, V. Srinivasa Chakravarthy
doi:10.1186/s12868-016-0283-6
PMCID: PMC5001212  PMID: 27534393
2.  Estimating Temporal Causal Interaction between Spike Trains with Permutation and Transfer Entropy 
PLoS ONE  2013;8(8):e70894.
Estimating the causal interaction between neurons is important for better understanding the functional connectivity of neuronal networks. We propose a method called normalized permutation transfer entropy (NPTE) to evaluate the temporal causal interaction between spike trains; it quantifies the fraction of ordinal information in one neuron that is also present in another. The performance of this method is evaluated with spike trains generated by an Izhikevich neuronal model. Results show that the NPTE method can effectively estimate the causal interaction between two neurons regardless of data length. Considering both the precision of the estimated time delay and the robustness of the estimated information flow against neuronal firing rate, the NPTE method is superior to other information-theoretic methods, including normalized transfer entropy, symbolic transfer entropy, and permutation conditional mutual information. To test the performance of NPTE on simulated, biophysically realistic synapses, an Izhikevich cortical network based on the neuronal model is employed. The NPTE method is able to characterize mutual interactions and exactly identify spurious causality in a network of three neurons. We conclude that the proposed method yields more reliable comparisons of interactions between different pairs of neurons and is a promising tool for uncovering further details of neural coding.
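As a rough illustration of the approach described above, the sketch below maps two sequences to ordinal (permutation) symbols of embedding dimension 2 and computes plain transfer entropy on the symbol streams. The embedding dimension and the absence of the paper's normalization are simplifications; this is not the authors' NPTE implementation.

```python
from collections import Counter
import math

def ordinal_symbols(x, dim=2):
    """Map a sequence to ordinal (permutation) symbols of embedding dimension dim."""
    return [tuple(sorted(range(dim), key=lambda k: x[i + k]))
            for i in range(len(x) - dim + 1)]

def transfer_entropy(sym_x, sym_y):
    """TE(X -> Y) in bits on two equal-length symbol sequences:
    how much sym_x[t] reduces uncertainty about sym_y[t+1] beyond sym_y[t]."""
    n = len(sym_y) - 1
    joint3 = Counter((sym_y[t + 1], sym_y[t], sym_x[t]) for t in range(n))
    joint2 = Counter((sym_y[t], sym_x[t]) for t in range(n))
    pair_y = Counter((sym_y[t + 1], sym_y[t]) for t in range(n))
    marg_y = Counter(sym_y[t] for t in range(n))
    te = 0.0
    for (y1, y0, x0), c in joint3.items():
        p = c / n
        te += p * math.log2((c / joint2[(y0, x0)]) / (pair_y[(y1, y0)] / marg_y[y0]))
    return te
```

For a pair of trains where Y is a lagged copy of X, TE(X→Y) clearly exceeds TE(Y→X), the directional asymmetry the method exploits.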
doi:10.1371/journal.pone.0070894
PMCID: PMC3733844  PMID: 23940662
3.  State-Space Analysis of Time-Varying Higher-Order Spike Correlation for Multiple Neural Spike Train Data 
PLoS Computational Biology  2012;8(3):e1002385.
Precise spike coordination between the spiking activities of multiple neurons is suggested as an indication of coordinated network activity in active cell assemblies. Spike correlation analysis aims to identify such cooperative network activity by detecting excess spike synchrony in simultaneously recorded multiple neural spike sequences. Cooperative activity is expected to organize dynamically during behavior and cognition; therefore currently available analysis techniques must be extended to enable the estimation of multiple time-varying spike interactions between neurons simultaneously. In particular, new methods must take advantage of the simultaneous observations of multiple neurons by addressing their higher-order dependencies, which cannot be revealed by pairwise analyses alone. In this paper, we develop a method for estimating time-varying spike interactions by means of a state-space analysis. Discretized parallel spike sequences are modeled as multi-variate binary processes using a log-linear model that provides a well-defined measure of higher-order spike correlation in an information geometry framework. We construct a recursive Bayesian filter/smoother for the extraction of spike interaction parameters. This method can simultaneously estimate the dynamic pairwise spike interactions of multiple single neurons, thereby extending the Ising/spin-glass model analysis of multiple neural spike train data to a nonstationary analysis. Furthermore, the method can estimate dynamic higher-order spike interactions. To validate the inclusion of the higher-order terms in the model, we construct an approximation method to assess the goodness-of-fit to spike data. In addition, we formulate a test method for the presence of higher-order spike correlation even in nonstationary spike data, e.g., data from awake behaving animals. The utility of the proposed methods is tested using simulated spike data with known underlying correlation dynamics. 
Finally, we apply the methods to neural spike data simultaneously recorded from the motor cortex of an awake monkey and demonstrate that the higher-order spike correlation organizes dynamically in relation to a behavioral demand.
Author Summary
Nearly half a century ago, the Canadian psychologist D. O. Hebb postulated the formation of assemblies of tightly connected cells in cortical recurrent networks because of changes in synaptic weight (Hebb's learning rule) by repetitive sensory stimulation of the network. Consequently, the activation of such an assembly for processing sensory or behavioral information is likely to be expressed by precisely coordinated spiking activities of the participating neurons. However, the available analysis techniques for multiple parallel neural spike data do not allow us to reveal the detailed structure of transiently active assemblies as indicated by their dynamical pairwise and higher-order spike correlations. Here, we construct a state-space model of dynamic spike interactions, and present a recursive Bayesian method that makes it possible to trace multiple neurons exhibiting such precisely coordinated spiking activities in a time-varying manner. We also formulate a hypothesis test of the underlying dynamic spike correlation, which enables us to detect the assemblies activated in association with behavioral events. Therefore, the proposed method can serve as a useful tool to test Hebb's cell assembly hypothesis.
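The log-linear measure of spike correlation that underlies the state-space model can be illustrated in a static setting. The sketch below estimates the pairwise interaction parameter theta_12 = log(p11*p00 / (p10*p01)) from binned binary spike trains; the add-one smoothing and the restriction to a stationary pairwise term are simplifications of the paper's time-varying, higher-order estimator.

```python
import math
from collections import Counter

def pairwise_theta(spikes_a, spikes_b):
    """Log-linear pairwise interaction theta_12 = log(p11*p00 / (p10*p01))
    estimated from two binary spike sequences (one time bin per entry).
    theta_12 > 0 indicates excess synchrony beyond independent firing."""
    counts = Counter(zip(spikes_a, spikes_b))
    n = len(spikes_a)
    # add-one smoothing so no pattern probability is exactly zero
    p = {pat: (counts.get(pat, 0) + 1) / (n + 4)
         for pat in [(0, 0), (0, 1), (1, 0), (1, 1)]}
    return math.log(p[(1, 1)] * p[(0, 0)] / (p[(1, 0)] * p[(0, 1)]))
```

Independent trains give theta near zero; perfectly synchronized trains give a large positive theta, the signature of excess spike coordination the paper tracks over time.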
doi:10.1371/journal.pcbi.1002385
PMCID: PMC3297562  PMID: 22412358
4.  Changing the responses of cortical neurons from sub- to suprathreshold using single spikes in vivo 
eLife  2013;2:e00012.
Action potential (AP) patterns of sensory cortex neurons encode a variety of stimulus features, but how can a neuron change the feature to which it responds? Here we show that, in vivo, a spike-timing-dependent plasticity (STDP) protocol consisting of pairing a postsynaptic AP with visually driven presynaptic inputs modifies a neuron's AP response bidirectionally, depending on the relative AP timing during pairing. Whereas postsynaptic APs repeatedly following presynaptic activation can convert subthreshold into suprathreshold responses, APs repeatedly preceding presynaptic activation reduce AP responses to visual stimulation. These changes were paralleled by a restructuring of the neuron's responses to surround stimulus locations and of the membrane-potential time course. Computational simulations could reproduce the observed subthreshold voltage changes only when presynaptic temporal jitter was included. Together, this shows that STDP rules can modify the output patterns of sensory neurons and that the timing of single APs plays a crucial role in sensory coding and plasticity.
DOI: http://dx.doi.org/10.7554/eLife.00012.001
eLife digest
Nerve cells, called neurons, are one of the core components of the brain and form complex networks by connecting to other neurons via long, thin ‘wire-like’ processes called axons. Axons can extend across the brain, enabling neurons to form connections—or synapses—with thousands of others. It is through these complex networks that incoming information from sensory organs, such as the eye, is propagated through the brain and encoded.
The basic unit of communication between neurons is the action potential, often called a ‘spike’, which propagates along the network of axons and, through a chemical process at synapses, communicates with the postsynaptic neurons that the axon is connected to. These action potentials excite the neuron that they arrive at, and this excitatory process can generate a new action potential that then propagates along the axon to excite additional target neurons. In the visual areas of the cortex, neurons respond with action potentials when they ‘recognize’ a particular feature in a scene—a process called tuning. How a neuron becomes tuned to certain features in the world and not to others is unclear, as are the rules that enable a neuron to change what it is tuned to. What is clear, however, is that to understand this process is to understand the basis of sensory perception.
Memory storage and formation is thought to occur at synapses. The efficiency of signal transmission between neurons can increase or decrease over time, and this process is often referred to as synaptic plasticity. But for these synaptic changes to be transmitted to target neurons, the changes must alter the number of action potentials. Although it has been shown in vitro that the efficiency of synaptic transmission—that is the strength of the synapse—can be altered by changing the order in which the pre- and postsynaptic cells are activated (referred to as ‘Spike-timing-dependent plasticity’), this has never been shown to have an effect on the number of action potentials generated in a single neuron in vivo. It is therefore unknown whether this process is functionally relevant.
Now Pawlak et al. report that spike-timing-dependent plasticity in the visual cortex of anaesthetized rats can change the spiking of neurons in the visual cortex. They used a visual stimulus (a bar flashed up for half a second) to activate a presynaptic cell, and triggered a single action potential in the postsynaptic cell a very short time later. By repeatedly activating the cells in this way, they increased the strength of the synaptic connection between the two neurons. After a small number of these pairing activations, presenting the visual stimulus alone to the presynaptic cell was enough to trigger an action potential (a suprathreshold response) in the postsynaptic neuron—even though this was not the case prior to the pairing.
This study shows that timing rules known to change the strength of synaptic connections—and proposed to underlie learning and memory—have functional relevance in vivo, and that the timing of single action potentials can change the functional status of a cortical neuron.
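The pairing protocol above rests on the standard STDP timing window: pre-before-post pairings potentiate the synapse, post-before-pre pairings depress it. A minimal sketch with exponential windows (the amplitudes and time constant are illustrative, not taken from the paper):

```python
import math

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Weight change for one pre/post pairing.
    dt = t_post - t_pre (ms): positive (pre leads post) -> potentiation,
    negative (post leads pre) -> depression. Parameters are illustrative."""
    if dt > 0:
        return a_plus * math.exp(-dt / tau)
    elif dt < 0:
        return -a_minus * math.exp(dt / tau)
    return 0.0
```

Repeated pairings at dt = +10 ms accumulate potentiation, which is how a small number of pairings can push a subthreshold visual response over spike threshold.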
DOI: http://dx.doi.org/10.7554/eLife.00012.002
doi:10.7554/eLife.00012
PMCID: PMC3552422  PMID: 23359858
synaptic plasticity; STDP; visual cortex; circuits; in vivo; spiking patterns; rat
5.  Mirrored STDP Implements Autoencoder Learning in a Network of Spiking Neurons 
PLoS Computational Biology  2015;11(12):e1004566.
The autoencoder algorithm is a simple but powerful unsupervised method for training neural networks. Autoencoder networks can learn sparse distributed codes similar to those seen in cortical sensory areas such as visual area V1, but they can also be stacked to learn increasingly abstract representations. Several computational neuroscience models of sensory areas, including Olshausen & Field’s Sparse Coding algorithm, can be seen as autoencoder variants, and autoencoders have seen extensive use in the machine learning community. Despite their power and versatility, autoencoders have been difficult to implement in a biologically realistic fashion. The challenges include their need to calculate differences between two neuronal activities and their requirement for learning rules which lead to identical changes at feedforward and feedback connections. Here, we study a biologically realistic network of integrate-and-fire neurons with anatomical connectivity and synaptic plasticity that closely matches that observed in cortical sensory areas. Our choice of synaptic plasticity rules is inspired by recent experimental and theoretical results suggesting that learning at feedback connections may have a different form from learning at feedforward connections, and our results depend critically on this novel choice of plasticity rules. Specifically, we propose that plasticity rules at feedforward versus feedback connections are temporally opposed versions of spike-timing dependent plasticity (STDP), leading to a symmetric combined rule we call Mirrored STDP (mSTDP). We show that with mSTDP, our network follows a learning rule that approximately minimizes an autoencoder loss function. When trained with whitened natural image patches, the learned synaptic weights resemble the receptive fields seen in V1. Our results use realistic synaptic plasticity rules to show that the powerful autoencoder learning algorithm could be within the reach of real biological networks.
Author Summary
In the brain areas responsible for sensory processing, neurons learn over time to respond to specific features in the external world. Here, we propose a new, biologically plausible model for how groups of neurons can learn which specific features to respond to. Our work connects theoretical arguments about the optimal forms of neuronal representations with experimental results showing how synaptic connections change in response to neuronal activity. Specifically, we show that biologically realistic neurons can implement an algorithm known as autoencoder learning, in which the neurons learn to form representations that can be used to reconstruct their inputs. Autoencoder networks can successfully model neuronal responses in early sensory areas, and they are also frequently used in machine learning for training deep neural networks. Despite their power and utility, autoencoder networks have not been previously implemented in a fully biological fashion. To perform the autoencoder algorithm, neurons must modify their incoming, feedforward synaptic connections as well as their outgoing, feedback synaptic connections—and the changes to both must depend on the errors the network makes when it tries to reconstruct its input. Here, we propose a model for activity in the network and show that the commonly used spike-timing-dependent plasticity paradigm will implement the desired changes to feedforward synaptic connection weights. Critically, we use recent experimental evidence to propose that feedback connections learn according to a temporally reversed plasticity rule. We show mathematically that the two rules combined can approximately implement autoencoder learning, and confirm our results using simulated networks of integrate-and-fire neurons. By showing that biological neurons can implement this powerful algorithm, our work opens the door for the modeling of many learning paradigms from both the fields of computational neuroscience and machine learning.
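The symmetry argument above can be made concrete: if the feedback window is the feedforward window reversed in time, a single spike pairing produces identical weight changes at the feedforward and feedback synapses between the same pair of neurons, which is the symmetry autoencoder learning requires. A sketch under that assumption (window shape and parameters are illustrative, not the paper's exact rule):

```python
import math

def k_ff(dt, a_plus=0.01, a_minus=0.01, tau=20.0):
    """Feedforward STDP window, dt = t_post - t_pre (ms)."""
    if dt > 0:
        return a_plus * math.exp(-dt / tau)
    if dt < 0:
        return -a_minus * math.exp(dt / tau)
    return 0.0

def k_fb(dt):
    """Feedback window: the feedforward window reversed in time (the mSTDP assumption)."""
    return k_ff(-dt)

def paired_updates(t_x, t_y):
    """Spike of neuron x at t_x, neuron y at t_y. The feedforward synapse x->y
    sees dt = t_y - t_x; at the feedback synapse y->x the pre/post roles swap,
    so it sees dt = t_x - t_y, but applies the reversed window."""
    return k_ff(t_y - t_x), k_fb(t_x - t_y)
```

For any spike timing the two returned updates are equal, so the feedforward and feedback weights stay matched as learning proceeds.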
doi:10.1371/journal.pcbi.1004566
PMCID: PMC4669146  PMID: 26633645
6.  Delay Selection by Spike-Timing-Dependent Plasticity in Recurrent Networks of Spiking Neurons Receiving Oscillatory Inputs 
PLoS Computational Biology  2013;9(2):e1002897.
Learning rules, such as spike-timing-dependent plasticity (STDP), change the structure of networks of neurons based on the firing activity. A network level understanding of these mechanisms can help infer how the brain learns patterns and processes information. Previous studies have shown that STDP selectively potentiates feed-forward connections that have specific axonal delays, and that this underlies behavioral functions such as sound localization in the auditory brainstem of the barn owl. In this study, we investigate how STDP leads to the selective potentiation of recurrent connections with different axonal and dendritic delays during oscillatory activity. We develop analytical models of learning with additive STDP in recurrent networks driven by oscillatory inputs, and support the results using simulations with leaky integrate-and-fire neurons. Our results show selective potentiation of connections with specific axonal delays, which depended on the input frequency. In addition, we demonstrate how this can lead to a network becoming selective in the amplitude of its oscillatory response to this frequency. We extend this model of axonal delay selection within a single recurrent network in two ways. First, we show the selective potentiation of connections with a range of both axonal and dendritic delays. Second, we show axonal delay selection between multiple groups receiving out-of-phase, oscillatory inputs. We discuss the application of these models to the formation and activation of neuronal ensembles or cell assemblies in the cortex, and also to missing fundamental pitch perception in the auditory brainstem.
Author Summary
Our brain's ability to perform cognitive processes, such as object identification, problem solving, and decision making, comes from the specific connections between neurons. The neurons carry information as spikes that are transmitted to other neurons via connections with different strengths and propagation delays. Experimentally observed learning rules can modify the strengths of connections between neurons based on the timing of their spikes. The learning that occurs in neuronal networks due to these rules is thought to be vital to creating the structures necessary for different cognitive processes as well as for memory. The spiking rate of populations of neurons has been observed to oscillate at particular frequencies in various brain regions, and there is evidence that these oscillations play a role in cognition. Here, we use analytical and numerical methods to investigate the changes to the network structure caused by a specific learning rule during oscillatory neural activity. We find the conditions under which connections with propagation delays that resonate with the oscillations are strengthened relative to the other connections. We demonstrate that networks learn to oscillate more strongly to oscillations at the frequency they were presented with during learning. We discuss the possible application of these results to specific areas of the brain.
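The delay-selection effect can be sketched by integrating an antisymmetric STDP window against a sinusoidal spike-time correlation shifted by the axonal delay; the sign of the expected weight drift then depends on where the delay falls within the oscillation period. This is an illustrative reduction, not the paper's full analysis:

```python
import math

def stdp_k(s, a_plus=1.0, a_minus=1.0, tau=10.0):
    """Antisymmetric additive STDP window (arbitrary units), s = t_post - t_arrival (ms)."""
    if s == 0:
        return 0.0
    return (a_plus if s > 0 else -a_minus) * math.exp(-abs(s) / tau)

def expected_drift(delay_ms, freq_hz, ds=0.1, span=100.0):
    """Expected drift of a connection with axonal delay `delay_ms` when pre- and
    postsynaptic rates oscillate in phase at freq_hz: numerically integrate the
    STDP window against the delayed sinusoidal spike-time correlation."""
    f = freq_hz / 1000.0  # cycles per ms
    total, s = 0.0, -span
    while s <= span:
        total += stdp_k(s) * math.cos(2 * math.pi * f * (s + delay_ms)) * ds
        s += ds
    return total
```

Scanning delays at a fixed input frequency shows bands of delays that are potentiated and bands that are depressed, which is the frequency-dependent selectivity described above.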
doi:10.1371/journal.pcbi.1002897
PMCID: PMC3567188  PMID: 23408878
7.  Rapid learning in visual cortical networks 
eLife  2015;4:e08417.
Although changes in brain activity during learning have been extensively examined at the single-neuron level, the coding strategies employed by cell populations remain mysterious. We examined cell populations in macaque area V4 during a rapid form of perceptual learning that emerges within tens of minutes. Multiple single units and LFP responses were recorded as monkeys improved their performance in an image discrimination task. We show that the increase in behavioral performance during learning is predicted by a tight coordination of spike timing with local population activity. Greater spike-LFP theta synchronization was correlated with higher learning performance, whereas high-frequency synchronization was unrelated to changes in performance; these changes were absent once learning had stabilized and stimuli became familiar, or in the absence of learning. These findings reveal a novel mechanism of plasticity in visual cortex by which elevated low-frequency synchronization between individual neurons and local population activity accompanies the improvement in performance during learning.
DOI: http://dx.doi.org/10.7554/eLife.08417.001
eLife digest
Throughout life, we learn and become better at many skills through repeated practice. However, how the brain cells enable us to adapt to changes in the environment and improve cognitive performance is poorly understood.
The activity of a neuron can be recorded as a ‘spike’ of electrical activity. In the nervous system, neurons work together in networks. If a group of neurons fire in a synchronized manner, waves of activity may be recorded from that brain region. One important issue in neuroscience is whether the spikes of individual neurons are synchronized with the local network activity. Indeed, it is generally believed that it is functionally important for individual cells to synchronize their responses to the waves of population activity.
The vast majority of studies aimed at understanding the behavior of neurons during learning have only recorded the activity of single neurons. This activity does not change much during learning, which suggests that learning may instead be encoded by the combined activity of a group of neurons. However, it is difficult to examine the same population of neurons as an animal practices and improves a skill. This is because the learning process typically takes longer than the length of time for which a single cell can be held in a stable condition and recorded from.
To overcome these limitations, Wang and Dragoi briefly flashed images at monkeys and trained them to report when the images have been rotated. Monkeys learn to do this within a single-training session, which allows the responses of the same group of neurons—found in a part of the brain called the mid-level visual cortex—to be recorded throughout the learning process.
Wang and Dragoi found that the improvement in behavioral performance during learning was accompanied by a tight synchronization between the spikes produced by individual neurons and the activity of groups of cells within a specific low-frequency band. This low-frequency activity had previously been linked to changes in the strength of functional connections between neurons in the hippocampus, which may be important for learning. The more synchronized this neural activity was, the better the monkeys were at the task. However, changes to the synchronization of spiking responses to local population activity in the higher frequency bands were unrelated to changes in performance. The changes to the level of synchronization were abolished once learning had stabilized and stimuli had become familiar.
Although Wang and Dragoi have found that the mid-level visual cortex neurons fire in a more synchronized way throughout learning, it remains to be confirmed whether these changes in synchronization are causally related to learning. Future studies could test whether this is the case by electrically or optically stimulating neurons so that their activity synchronizes with the local population activity, and investigating whether this manipulation improves learning ability.
DOI: http://dx.doi.org/10.7554/eLife.08417.002
doi:10.7554/eLife.08417
PMCID: PMC4588715  PMID: 26308578
monkey; learning; visual cortex; oscillations; populations; other
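The spike-to-population synchronization central to the study above is commonly quantified with a phase-locking measure. A minimal sketch of one standard measure, the phase-locking value, on synthetic data (the 4 Hz rhythm and spike statistics below are illustrative assumptions, not the authors' recordings or analysis):

```python
import math
import random

random.seed(0)

def phase_locking_value(spike_phases):
    """Length of the mean unit vector of the oscillation phases at spike
    times: 1 = perfect locking, ~0 = no consistent phase relationship."""
    n = len(spike_phases)
    c = sum(math.cos(p) for p in spike_phases) / n
    s = sum(math.sin(p) for p in spike_phases) / n
    return math.hypot(c, s)

# A 4 Hz population rhythm sampled at 1 kHz for 10 s.
dt, freq = 0.001, 4.0
phases = [(2 * math.pi * freq * t * dt) % (2 * math.pi) for t in range(10000)]

# "Locked" neuron: spike probability peaks at phase 0 of the rhythm.
locked = [p for p in phases if random.random() < 0.02 * (1 + math.cos(p))]
# "Unlocked" control: the same number of spikes at random phases.
unlocked = [random.choice(phases) for _ in range(len(locked))]

print(round(phase_locking_value(locked), 2))    # substantially above zero
print(round(phase_locking_value(unlocked), 2))  # near zero
```

The locked neuron yields a value near 0.5 here (its spike density is proportional to 1 + cos φ), while the control stays near zero. On real data the instantaneous phase would first be extracted from the band-passed local field potential, e.g. via a Hilbert transform.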
8.  Effects of Neural Morphology and Input Distribution on Synaptic Processing by Global and Focal NMDA-Spikes 
PLoS ONE  2015;10(10):e0140254.
Cortical neurons can respond to glutamatergic stimulation with regenerative N-Methyl-D-aspartic acid (NMDA)-spikes. NMDA-spikes were initially thought to depend on clustered synaptic activation. Recent work has shown, however, a new variety: a global NMDA-spike, which can be generated by randomly distributed inputs. Very little is known about the factors that influence the generation of these global NMDA-spikes, as well as the potentially distinct rules of synaptic integration and the computational significance conferred by the two types of NMDA-spikes. Here I show that the input resistance (RIN) plays a major role in influencing spike initiation; while the classical, focal NMDA-spike depended upon the local (dendritic) RIN, the threshold of global NMDA-spike generation was set by the somatic RIN. As cellular morphology can exert a large influence on RIN, morphologically distinct neuron types can have dissimilar rules for NMDA-spike generation. For example, cortical neurons in superficial layers were found to be generally prone to global NMDA-spike generation. In contrast, the electrical properties of cortical layer 5b cells clearly favor focal NMDA-spikes. These differences can translate into diverse synaptic integration rules for the different classes of cortical cells; simulated superficial-layer neurons were found to exhibit strong synaptic interactions between different dendritic branches, giving rise to a single integrative compartment mediated by the global NMDA-spike. In these cells, the efficiency of postsynaptic activation depended relatively little on synaptic distribution. By contrast, layer 5b neurons were capable of true multi-unit computation involving independent integrative compartments formed by clustered synaptic input, which could trigger focal NMDA-spikes. 
In sharp contrast to superficial-layer neurons, randomly distributed synaptic inputs were not very effective in driving firing in layer 5b cells, indicating the possibility of different computations performed by these important cortical neurons.
doi:10.1371/journal.pone.0140254
PMCID: PMC4604166  PMID: 26460829
9.  Dynamic Effective Connectivity of Inter-Areal Brain Circuits 
PLoS Computational Biology  2012;8(3):e1002438.
Anatomic connections between brain areas affect information flow between neuronal circuits and the synchronization of neuronal activity. However, such structural connectivity does not coincide with effective connectivity (or, more precisely, causal connectivity), related to the elusive question “Which areas cause the present activity of which others?”. Effective connectivity is directed and depends flexibly on contexts and tasks. Here we show that dynamic effective connectivity can emerge from transitions in the collective organization of coherent neural activity. Integrating simulation and semi-analytic approaches, we study mesoscale network motifs of interacting cortical areas, modeled as large random networks of spiking neurons or as simple rate units. Through a causal analysis of time-series of model neural activity, we show that different dynamical states generated by the same structural connectivity motif correspond to distinct effective connectivity motifs. Such effective motifs can display a dominant directionality, due to spontaneous symmetry breaking and effective entrainment between local brain rhythms, although all connections in the considered structural motifs are reciprocal. We then show that transitions between effective connectivity configurations (for instance, a reversal in the direction of inter-areal interactions) can be triggered reliably by brief perturbation inputs, properly timed with respect to an ongoing local oscillation, without the need for plastic synaptic changes. Finally, we analyze how the information encoded in spiking patterns of a local neuronal population is propagated across a fixed structural connectivity motif, demonstrating that changes in the active effective connectivity regulate both the efficiency and the directionality of information transfer. Previous studies stressed the role played by coherent oscillations in establishing efficient communication between distant areas. 
Going beyond these early proposals, we propose here that dynamic interactions between brain rhythms also provide the basis for the self-organized control of this “communication-through-coherence”, thus making possible a fast “on-demand” reconfiguration of global information routing modalities.
Author Summary
The circuits of the brain must perform a daunting number of functions. But how can “brain states” be flexibly controlled, given that anatomic inter-areal connections can be considered as fixed, on timescales relevant for behavior? We hypothesize that, thanks to the nonlinear interaction between brain rhythms, even a simple circuit involving few brain areas can give rise to a multitude of effective circuits, associated with alternative functions selectable “on demand”. A distinction is usually made between structural connectivity, which describes actual synaptic connections, and effective connectivity, quantifying, beyond correlation, directed inter-areal causal influences. In our study, we measure effective connectivity based on time-series of neural activity generated by model inter-areal circuits. We find that “causality follows dynamics”. Indeed, we show that different effective networks correspond to different dynamical states associated with the same structural network (in particular, different phase-locking patterns between local neuronal oscillations). We then find that “information follows causality” (and thus, again, dynamics). We demonstrate that different effective networks give rise to alternative modalities of information routing between brain areas wired together in a fixed structural network. In particular, we show that the self-organization of interacting “analog” rate oscillations controls the flow of “digital-like” information encoded in complex spiking patterns.
doi:10.1371/journal.pcbi.1002438
PMCID: PMC3310731  PMID: 22457614
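The directed causal analysis underlying effective connectivity can be illustrated with a minimal Granger-style lagged-regression sketch. This is not the study's semi-analytic method; the two coupled autoregressive units below are illustrative stand-ins for the modeled areas:

```python
import random

random.seed(1)

def ols_resid_var(y, X):
    """Residual variance of a least-squares fit y ~ X (X: list of columns),
    via the normal equations and Gaussian elimination."""
    k, n = len(X), len(y)
    G = [[sum(X[i][t] * X[j][t] for t in range(n)) for j in range(k)] for i in range(k)]
    h = [sum(X[i][t] * y[t] for t in range(n)) for i in range(k)]
    for i in range(k):                          # forward elimination
        for j in range(i + 1, k):
            f = G[j][i] / G[i][i]
            for m in range(k):
                G[j][m] -= f * G[i][m]
            h[j] -= f * h[i]
    b = [0.0] * k
    for i in range(k - 1, -1, -1):              # back substitution
        b[i] = (h[i] - sum(G[i][j] * b[j] for j in range(i + 1, k))) / G[i][i]
    resid = [y[t] - sum(b[i] * X[i][t] for i in range(k)) for t in range(n)]
    return sum(r * r for r in resid) / n

def granger(source, target):
    """Fractional prediction-error reduction when the source's past is
    added to the target's own past (crude Granger causality, lag 1)."""
    y, y1, x1 = target[1:], target[:-1], source[:-1]
    v_restricted = ols_resid_var(y, [y1])
    v_full = ols_resid_var(y, [y1, x1])
    return 1 - v_full / v_restricted

# x drives y unidirectionally; both have autoregressive dynamics.
x, y = [0.0], [0.0]
for _ in range(4000):
    x.append(0.5 * x[-1] + random.gauss(0, 1))
    y.append(0.5 * y[-1] + 0.8 * x[-2] + random.gauss(0, 1))

print(round(granger(x, y), 2))   # substantial: x's past predicts y
print(round(granger(y, x), 2))   # near zero: no causation y -> x
```

The asymmetry of the two values recovers the directionality of the coupling from the time series alone, which is the sense in which "causality follows dynamics" is measured here.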
10.  Rich-Club Organization in Effective Connectivity among Cortical Neurons 
The Journal of Neuroscience  2016;36(3):670-684.
The performance of complex networks, like the brain, depends on how effectively their elements communicate. Despite the importance of communication, it is virtually unknown how information is transferred in local cortical networks, consisting of hundreds of closely spaced neurons. To address this, it is important to record simultaneously from hundreds of neurons at a spacing that matches typical axonal connection distances, and at a temporal resolution that matches synaptic delays. We used a 512-electrode array (60 μm spacing) to record spontaneous activity at 20 kHz from up to 500 neurons simultaneously in slice cultures of mouse somatosensory cortex for 1 h at a time. We applied a previously validated version of transfer entropy to quantify information transfer. Similar to in vivo reports, we found an approximately lognormal distribution of firing rates. Pairwise information transfer strengths also were nearly lognormally distributed, similar to reports of synaptic strengths. Some neurons transferred and received much more information than others, which is consistent with previous predictions. Neurons with the highest outgoing and incoming information transfer were more strongly connected to each other than chance, thus forming a “rich club.” We found similar results in networks recorded in vivo from rodent cortex, suggesting the generality of these findings. A rich-club structure has been found previously in large-scale human brain networks and is thought to facilitate communication between cortical regions. The discovery of a small, but information-rich, subset of neurons within cortical regions suggests that this population will play a vital role in communication, learning, and memory.
SIGNIFICANCE STATEMENT Many studies have focused on communication networks between cortical brain regions. In contrast, very few studies have examined communication networks within a cortical region. This is the first study to combine such a large number of neurons (several hundred at a time) with such high temporal resolution (so we can know the direction of communication between neurons) for mapping networks within cortex. We found that information was not transferred equally through all neurons. Instead, ∼70% of the information passed through only 20% of the neurons. Network models suggest that this highly concentrated pattern of information transfer would be both efficient and robust to damage. Therefore, this work may help in understanding how the cortex processes information and responds to neurodegenerative diseases.
doi:10.1523/JNEUROSCI.2177-15.2016
PMCID: PMC4719009  PMID: 26791200
effective connectivity; information transfer; microcircuits; rich club; transfer entropy
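Transfer entropy on binarized spike trains, as used in the study above, can be sketched with a first-order (one-bin history) estimator. The spike trains below are synthetic stand-ins, and the previously validated version the authors applied (with delays and longer histories) is richer than this:

```python
import math
import random
from collections import Counter

random.seed(2)

def transfer_entropy(src, dst):
    """TE(src -> dst) in bits with one-bin histories:
    sum over states of p(d', d, s) * log2[ p(d'|d, s) / p(d'|d) ]."""
    n = len(dst) - 1
    triples = Counter(zip(dst[1:], dst[:-1], src[:-1]))
    pairs_ds = Counter(zip(dst[:-1], src[:-1]))
    pairs_dd = Counter(zip(dst[1:], dst[:-1]))
    singles = Counter(dst[:-1])
    te = 0.0
    for (d1, d0, s0), count in triples.items():
        p_full = count / pairs_ds[(d0, s0)]         # p(d' | d, s)
        p_self = pairs_dd[(d1, d0)] / singles[d0]   # p(d' | d)
        te += (count / n) * math.log2(p_full / p_self)
    return te

# A drives B with a one-bin delay (10% transmission noise); C is unrelated.
a = [1 if random.random() < 0.1 else 0 for _ in range(20000)]
b = [0] + [x if random.random() < 0.9 else 1 - x for x in a[:-1]]
c = [1 if random.random() < 0.1 else 0 for _ in range(20000)]

print(round(transfer_entropy(a, b), 3))  # clearly positive
print(round(transfer_entropy(c, b), 3))  # near zero
```

Applying such a pairwise estimator to every ordered pair of recorded neurons yields the weighted, directed networks whose hubs form the reported rich club.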
11.  Cerebellar Nuclear Neurons Use Time and Rate Coding to Transmit Purkinje Neuron Pauses 
PLoS Computational Biology  2015;11(12):e1004641.
Neurons of the cerebellar nuclei convey the final output of the cerebellum to their targets in various parts of the brain. Within the cerebellum their direct upstream connections originate from inhibitory Purkinje neurons. Purkinje neurons have a complex firing pattern of regular spikes interrupted by intermittent pauses of variable length. How can the cerebellar nucleus process this complex input pattern? In this modeling study, we investigate different forms of Purkinje neuron simple spike pause synchrony and its influence on candidate coding strategies in the cerebellar nuclei. That is, we investigate how different alignments of synchronous pauses in synthetic Purkinje neuron spike trains affect either time-locking or rate-changes in the downstream nuclei. We find that Purkinje neuron synchrony is mainly represented by changes in the firing rate of cerebellar nuclei neurons. Pause beginning synchronization produced a unique effect on nuclei neuron firing, while the effect of pause ending and pause overlapping synchronization could not be distinguished from each other. Pause beginning synchronization produced better time-locking of nuclear neurons for short length pauses. We also characterize the effect of pause length and spike jitter on the nuclear neuron firing. Additionally, we find that the rate of rebound responses in nuclear neurons after a synchronous pause is controlled by the firing rate of Purkinje neurons preceding it.
Author Summary
Neurons can transmit information by two different coding strategies: rate coding, in which the firing rate of the neuron is vital, and time coding, in which the timing of individual spikes carries the relevant information. In this study we analyze the importance of brief cessations in firing of the presynaptic neuron (pauses) on the spiking of the postsynaptic neuron. We perform this analysis on the inhibitory synaptic connection between Purkinje neurons (presynaptic) and nuclear neurons (postsynaptic) of the cerebellum. We employ a computational model of nuclear neurons and “synthetic” Purkinje neuron spike trains to study the effect of synchronous pauses on the spiking responses of nuclear neurons. We find that synchronous pauses can cause both well-timed spikes and increased firing rates in the nuclear neuron. In addition, we characterize the effect of pause length, amount and type of pause synchrony, and spike jitter. As such, we conclude that nuclear cells use both rate and time coding to relay upstream spiking information.
doi:10.1371/journal.pcbi.1004641
PMCID: PMC4668013  PMID: 26630202
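The central observation of the study above, that a synchronous pause in inhibitory Purkinje input produces both well-timed spikes and a firing-rate change downstream, can be caricatured with a single leaky integrate-and-fire cell whose tonic inhibition is switched off during a pause. All constants below are illustrative, not those of the study's nuclear-neuron model:

```python
def simulate(pause_start, pause_end, n_steps=1000):
    """Leaky integrate-and-fire cell (1 ms steps) with tonic excitatory
    drive and tonic inhibition; the inhibition is switched off during
    [pause_start, pause_end) to mimic a synchronous Purkinje pause."""
    tau, dt, v_thresh = 0.02, 0.001, 1.0
    v, spikes = 0.0, []
    for t in range(n_steps):
        inhibition = 0.0 if pause_start <= t < pause_end else 2.0
        drive = 1.5 - inhibition          # net input current
        v += (dt / tau) * (-v + drive)
        if v >= v_thresh:                 # spike and reset
            spikes.append(t)
            v = 0.0
    return spikes

spikes = simulate(400, 600)
in_pause = [t for t in spikes if 400 <= t < 600]
outside = [t for t in spikes if not (400 <= t < 600)]
print(len(in_pause), len(outside))
```

The cell is silent under tonic inhibition, fires its first spike at a fixed latency after pause onset (a time code), and its spike count grows with pause length (a rate code), illustrating how a downstream neuron can express both codes at once.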
12.  High-Degree Neurons Feed Cortical Computations 
PLoS Computational Biology  2016;12(5):e1004858.
Recent work has shown that functional connectivity among cortical neurons is highly varied, with a small percentage of neurons having many more connections than others. Also, recent theoretical developments now make it possible to quantify how neurons modify information from the connections they receive. Therefore, it is now possible to investigate how information modification, or computation, depends on the number of connections a neuron receives (in-degree) or sends out (out-degree). To do this, we recorded the simultaneous spiking activity of hundreds of neurons in cortico-hippocampal slice cultures using a high-density 512-electrode array. This preparation and recording method combination produced large numbers of neurons recorded at temporal and spatial resolutions that are not currently available in any in vivo recording system. We utilized transfer entropy (a well-established method for detecting linear and nonlinear interactions in time series) and the partial information decomposition (a powerful, recently developed tool for dissecting multivariate information processing into distinct parts) to quantify computation between neurons where information flows converged. We found that computations did not occur equally in all neurons throughout the networks. Surprisingly, neurons that computed large amounts of information tended to receive connections from high out-degree neurons. However, the in-degree of a neuron was not related to the amount of information it computed. To gain insight into these findings, we developed a simple feedforward network model. We found that a degree-modified Hebbian wiring rule best reproduced the pattern of computation and degree correlation results seen in the real data. Interestingly, this rule also maximized signal propagation in the presence of network-wide correlations, suggesting a mechanism by which cortex could deal with common random background input. 
These are the first results to show that the extent to which a neuron modifies incoming information streams depends on its topological location in the surrounding functional network.
Author Summary
We recorded the electrical activity of hundreds of neurons simultaneously in brain tissue from mice and we analyzed these signals using state-of-the-art tools from information theory. These tools allowed us to ascertain which neurons were transmitting information to other neurons and to characterize the computations performed by neurons using the inputs they received from two or more other neurons. We found that computations did not occur equally in all neurons throughout the networks. Surprisingly, neurons that computed large amounts of information tended to be recipients of information from neurons with a large number of outgoing connections. Interestingly, the number of incoming connections to a neuron was not related to the amount of information that neuron computed. To better understand these results, we built a network model to match the data. Unexpectedly, the model also maximized information transfer in the presence of network-wide correlations. This suggested a way that networks of cortical neurons could deal with common random background input. These results are the first to show that the amount of information computed by a neuron depends on where it is located in the surrounding network.
doi:10.1371/journal.pcbi.1004858
PMCID: PMC4861348  PMID: 27159884
13.  Model-Free Reconstruction of Excitatory Neuronal Connectivity from Calcium Imaging Signals 
PLoS Computational Biology  2012;8(8):e1002653.
A systematic assessment of global neural network connectivity through direct electrophysiological assays has remained technically infeasible, even in simpler systems like dissociated neuronal cultures. We introduce an improved algorithmic approach based on Transfer Entropy to reconstruct structural connectivity from network activity monitored through calcium imaging. We focus in this study on the inference of excitatory synaptic links. Based on information theory, our method requires no prior assumptions on the statistics of neuronal firing and neuronal connections. The performance of our algorithm is benchmarked on surrogate time series of calcium fluorescence generated by the simulated dynamics of a network with known ground-truth topology. We find that the functional network topology revealed by Transfer Entropy depends qualitatively on the time-dependent dynamic state of the network (bursting or non-bursting). Thus, by conditioning on the global mean activity, we improve the performance of our method. This allows us to focus the analysis on specific dynamical regimes of the network in which the inferred functional connectivity is shaped by monosynaptic excitatory connections, rather than by collective synchrony. Our method can discriminate between actual causal influences between neurons and spurious non-causal correlations due to light scattering artifacts, which inherently affect the quality of fluorescence imaging. Compared to other reconstruction strategies such as cross-correlation or Granger Causality methods, our method based on improved Transfer Entropy is remarkably more accurate. In particular, it provides a good estimation of the excitatory network clustering coefficient, allowing for discrimination between weakly and strongly clustered topologies. 
Finally, we demonstrate the applicability of our method to analyses of real recordings of in vitro disinhibited cortical cultures, where we suggest that excitatory connections are characterized by an elevated (although not extreme) level of clustering compared to a random graph and can be markedly non-local.
Author Summary
Unraveling the general organizing principles of connectivity in neural circuits is a crucial step towards understanding brain function. However, even the simpler task of assessing the global excitatory connectivity of a culture in vitro, where neurons form self-organized networks in the absence of external stimuli, remains challenging. Neuronal cultures undergo spontaneous switching between episodes of synchronous bursting and quieter inter-burst periods. We introduce here a novel algorithm which aims at inferring the connectivity of neuronal cultures from calcium fluorescence recordings of their network dynamics. To achieve this goal, we develop a suitable generalization of Transfer Entropy, an information-theoretic measure of causal influences between time series. Unlike previous algorithmic approaches to reconstruction, Transfer Entropy is data-driven and does not rely on specific assumptions about neuronal firing statistics or network topology. We generate simulated calcium signals from networks with controlled ground-truth topology and purely excitatory interactions and show that, by restricting the analysis to inter-burst periods, Transfer Entropy robustly achieves a good reconstruction performance for disparate network connectivities. Finally, we apply our method to real data and find evidence of non-random features in cultured networks, such as the existence of highly connected hub excitatory neurons and of an elevated (but not extreme) level of clustering.
doi:10.1371/journal.pcbi.1002653
PMCID: PMC3426566  PMID: 22927808
14.  Complex Events Initiated by Individual Spikes in the Human Cerebral Cortex  
PLoS Biology  2008;6(9):e222.
Synaptic interactions between neurons of the human cerebral cortex have not previously been studied directly. We recorded the first dataset, to our knowledge, on the synaptic effect of identified human pyramidal cells on various types of postsynaptic neurons and reveal complex events triggered by individual action potentials in the human neocortical network. Brain slices were prepared from nonpathological samples of cortex that had to be removed for the surgical treatment of brain areas beneath association cortices of 58 patients aged 18 to 73 y. Simultaneous triple and quadruple whole-cell patch clamp recordings were performed testing mono- and polysynaptic potentials in target neurons following a single action potential fired by layer 2/3 pyramidal cells, and the temporal structure of events and underlying mechanisms were analyzed. In addition to monosynaptic postsynaptic potentials, individual action potentials in presynaptic pyramidal cells initiated long-lasting (37 ± 17 ms) sequences of events in the network lasting an order of magnitude longer than detected previously in other species. These event series were composed of specifically alternating glutamatergic and GABAergic postsynaptic potentials and required selective spike-to-spike coupling from pyramidal cells to GABAergic interneurons producing concomitant inhibitory as well as excitatory feed-forward action of GABA. Single action potentials of human neurons are sufficient to recruit Hebbian-like neuronal assemblies that are proposed to participate in cognitive processes.
Author Summary
We recorded the first connections, to our knowledge, between human nerve cells and reveal that a subset of interactions is so strong that some presynaptic cells are capable of eliciting action potentials in the postsynaptic target neurons. Interestingly, these strong connections selectively link pyramidal cells using the neurotransmitter glutamate to neurons releasing gamma aminobutyric acid (GABA). Moreover, the GABAergic neurons receiving the strong connections include different types: basket cells, which inhibit several target cell populations, and another type called the chandelier cells, which can be excitatory and target pyramidal cells only. Thus, the activation originating from a single pyramidal cell propagates to synchronously working inhibitory and excitatory GABAergic neurons. Inhibition then arrives to various neuron classes, but excitation finds only pyramidal cells, which in turn, can propagate excitation even further in the network of neurons. This chain of events revealed here leads to network activation approximately an order of magnitude longer than detected previously in response to a single action potential in a single neuron. Individual-neuron–activated groups of neurons resemble the so-called functional assemblies that were proposed as building blocks of higher order cognitive representations.
A novel study on connections between human neurons reveals that single spikes in pyramidal cells can activate synchronously timed assemblies through strong connections linking pyramidal cells with inhibitory and excitatory GABAergic neurons.
doi:10.1371/journal.pbio.0060222
PMCID: PMC2528052  PMID: 18767905
15.  Network Self-Organization Explains the Statistics and Dynamics of Synaptic Connection Strengths in Cortex 
PLoS Computational Biology  2013;9(1):e1002848.
The information processing abilities of neural circuits arise from their synaptic connection patterns. Understanding the laws governing these connectivity patterns is essential for understanding brain function. The overall distribution of synaptic strengths of local excitatory connections in cortex and hippocampus is long-tailed, exhibiting a small number of synaptic connections of very large efficacy. At the same time, new synaptic connections are constantly being created and individual synaptic connection strengths show substantial fluctuations across time. It remains unclear through what mechanisms these properties of neural circuits arise and how they contribute to learning and memory. In this study we show that fundamental characteristics of excitatory synaptic connections in cortex and hippocampus can be explained as a consequence of self-organization in a recurrent network combining spike-timing-dependent plasticity (STDP), structural plasticity and different forms of homeostatic plasticity. In the network, associative synaptic plasticity in the form of STDP induces a rich-get-richer dynamics among synapses, while homeostatic mechanisms induce competition. Under distinctly different initial conditions, the ensuing self-organization produces long-tailed synaptic strength distributions matching experimental findings. We show that this self-organization can take place with a purely additive STDP mechanism and that multiplicative weight dynamics emerge as a consequence of network interactions. The observed patterns of fluctuation of synaptic strengths, including elimination and generation of synaptic connections and long-term persistence of strong connections, are consistent with the dynamics of dendritic spines found in rat hippocampus. Beyond this, the model predicts an approximately power-law scaling of the lifetimes of newly established synaptic connection strengths during development. 
Our results suggest that the combined action of multiple forms of neuronal plasticity plays an essential role in the formation and maintenance of cortical circuits.
Author Summary
The computations that brain circuits can perform depend on their wiring. While a wiring diagram is still out of reach for major brain structures such as the neocortex and hippocampus, data on the overall distribution of synaptic connection strengths and the temporal fluctuations of individual synapses have recently become available. Specifically, there exists a small population of very strong and stable synaptic connections, which may form the physiological substrate of life-long memories. This population coexists with a large, ever-changing population of much smaller and strongly fluctuating synaptic connections. So far it has remained unclear how these properties of networks in neocortex and hippocampus arise. Here we present a computational model that explains these fundamental properties of neural circuits as a consequence of network self-organization resulting from the combined action of different forms of neuronal plasticity. This self-organization is driven by a rich-get-richer effect induced by an associative synaptic learning mechanism which is kept in check by several homeostatic plasticity mechanisms stabilizing the network. The model highlights the role of self-organization in the formation of brain circuits and parsimoniously explains a range of recent findings about their fundamental properties.
doi:10.1371/journal.pcbi.1002848
PMCID: PMC3536614  PMID: 23300431
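The rich-get-richer dynamics kept in check by homeostatic competition can be caricatured in a few lines: potentiation probability grows with relative synaptic strength (a crude stand-in for STDP, since stronger synapses more often drive the postsynaptic spike), while normalization of the total weight enforces competition. This toy omits spike timing and structural plasticity entirely and is only meant to show how the combination yields a long-tailed weight distribution:

```python
import random

random.seed(3)

n_syn, steps = 200, 2000
w = [1.0] * n_syn
for _ in range(steps):
    w_max = max(w)
    for i in range(n_syn):
        # potentiation is more likely for already-strong synapses
        if random.random() < 0.1 * w[i] / w_max:
            w[i] *= 1.05
    total = sum(w)                        # homeostatic normalization:
    w = [wi * n_syn / total for wi in w]  # total weight is conserved

w_sorted = sorted(w)
# heavy tail: the strongest synapse dwarfs the median one
print(round(w_sorted[-1] / w_sorted[n_syn // 2], 1))
```

Multiplicative growth plus global competition generically produces such long-tailed (approximately lognormal) distributions, which is the qualitative effect the full plasticity model reproduces quantitatively.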
16.  Efficient "Shotgun" Inference of Neural Connectivity from Highly Sub-sampled Activity Data 
PLoS Computational Biology  2015;11(10):e1004464.
Inferring connectivity in neuronal networks remains a key challenge in statistical neuroscience. The “common input” problem presents a major roadblock: it is difficult to reliably distinguish causal connections between pairs of observed neurons versus correlations induced by common input from unobserved neurons. Available techniques allow us to simultaneously record, with sufficient temporal resolution, only a small fraction of the network. Consequently, naive connectivity estimators that neglect these common input effects are highly biased. This work proposes a “shotgun” experimental design, in which we observe multiple sub-networks briefly, in a serial manner. Thus, while the full network cannot be observed simultaneously at any given time, we may be able to observe much larger subsets of the network over the course of the entire experiment, ameliorating the common input problem. Using a generalized linear model for a spiking recurrent neural network, we develop a scalable approximate expected log-likelihood-based Bayesian method to perform network inference given this type of data, in which only a small fraction of the network is observed in each time bin. We demonstrate in simulation that the shotgun experimental design can eliminate the biases induced by common input effects. Networks with thousands of neurons, in which only a small fraction of the neurons is observed in each time bin, can be quickly and accurately estimated, achieving orders of magnitude speed up over previous approaches.
Author Summary
Optical imaging of the activity in a neuronal network is limited by the scanning speed of the imaging device. Therefore, typically, only a small fixed part of the network is observed during the entire experiment. However, in such an experiment, it can be hard to infer from the observed activity patterns whether (1) a neuron A directly affects neuron B, or (2) another, unobserved neuron C affects both A and B. To deal with this issue, we propose a “shotgun” observation scheme, in which, at each time point, we observe a small changing subset of the neurons from the network. Consequently, many fewer neurons remain completely unobserved during the entire experiment, enabling us to eventually distinguish between cases (1) and (2) given sufficiently long experiments. Since previous inference algorithms cannot efficiently handle so many missing observations, we develop a scalable algorithm for data acquired using the shotgun observation scheme, in which only a small fraction of the neurons are observed in each time bin. Using this kind of simulated data, we show the algorithm is able to quickly infer connectivity in spiking recurrent networks with thousands of neurons.
doi:10.1371/journal.pcbi.1004464
PMCID: PMC4605541  PMID: 26465147
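The common-input problem this design targets is easy to reproduce: a hidden neuron C driving both A and B induces a strong A-B correlation that a naive pairwise estimator mistakes for a connection, and that disappears once C is observed and held fixed. The binary activity below is synthetic and illustrative, not the paper's generalized linear model:

```python
import random

random.seed(4)

T = 20000
# Hidden neuron C drives both A and B; there is no A-B connection.
c = [1 if random.random() < 0.2 else 0 for _ in range(T)]
a = [1 if (ci and random.random() < 0.8) or random.random() < 0.05 else 0
     for ci in c]
b = [1 if (ci and random.random() < 0.8) or random.random() < 0.05 else 0
     for ci in c]

def corr(x, y):
    """Pearson correlation of two equal-length binary sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / n
    vx = sum((xi - mx) ** 2 for xi in x) / n
    vy = sum((yi - my) ** 2 for yi in y) / n
    return cov / (vx * vy) ** 0.5

naive = corr(a, b)                  # looks like a strong A-B connection
# Condition on the common input: restrict to bins where C is silent.
quiet = [t for t in range(T) if c[t] == 0]
conditioned = corr([a[t] for t in quiet], [b[t] for t in quiet])
print(round(naive, 2), round(conditioned, 2))
```

The spurious correlation vanishes only in the bins where C is observed, which is exactly why serially observing many overlapping sub-networks, so that few neurons stay hidden for the whole experiment, removes the bias.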
17.  Phase-Coherence Transitions and Communication in the Gamma Range between Delay-Coupled Neuronal Populations 
PLoS Computational Biology  2014;10(7):e1003723.
Synchronization between neuronal populations plays an important role in information transmission between brain areas. In particular, collective oscillations emerging from the synchronized activity of thousands of neurons can increase the functional connectivity between neural assemblies by coherently coordinating their phases. This synchrony of neuronal activity can take place within a cortical patch or between different cortical regions. While short-range interactions between neurons involve just a few milliseconds, communication through long-range projections between different regions could take up to tens of milliseconds. How these heterogeneous transmission delays affect communication between neuronal populations is not well known. To address this question, we have studied the dynamics of two bidirectionally delayed-coupled neuronal populations using conductance-based spiking models, examining how different synaptic delays give rise to in-phase/anti-phase transitions at particular frequencies within the gamma range, and how this behavior is related to the phase coherence between the two populations at different frequencies. We have used spectral analysis and information theory to quantify the information exchanged between the two networks. For different transmission delays between the two coupled populations, we analyze how the local field potential and multi-unit activity calculated from one population convey information in response to a set of external inputs applied to the other population. The results confirm that zero-lag synchronization maximizes information transmission, although out-of-phase synchronization allows for efficient communication provided the coupling delay, the phase lag between the populations, and the frequency of the oscillations are properly matched.
Author Summary
The correct operation of the brain requires a carefully orchestrated activity, which includes the establishment of synchronized behavior among multiple neuronal populations. Synchronization of collective neuronal oscillations, in particular, has been suggested to mediate communication between brain areas, with the global oscillations acting as “information carriers” on which signals encoding specific stimuli or brain states are superimposed. But neuronal signals travel at finite speeds across the brain, thus leading to a wide range of delays in the coupling between neuronal populations. How the brain reaches the required level of coordination in the presence of such delays is still unclear. Here we approach this question in the case of two delay-coupled neuronal populations exhibiting collective oscillations in the gamma range. Our results show that effective communication can be reached even in the presence of relatively large delays between the populations, which self-organize in either in-phase or anti-phase synchronized states. In those states the transmission delays, phase difference, and oscillation frequency match to allow for communication at a wide range of coupling delays between brain areas.
doi:10.1371/journal.pcbi.1003723
PMCID: PMC4110076  PMID: 25058021
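The delay-dependent in-phase/anti-phase transitions described in this abstract can be illustrated with a much simpler stand-in than the paper's conductance-based networks: two symmetrically delay-coupled phase oscillators. All parameters below (40 Hz rhythm, coupling strength, delays) are illustrative, not taken from the study.

```python
import numpy as np

def phase_lag(delay_ms, f=40.0, k=0.05, dt=0.1, t_max=1000.0):
    """Steady-state phase difference (rad, folded into [0, pi]) of two
    symmetrically delay-coupled phase oscillators."""
    omega = 2 * np.pi * f / 1000.0            # rad/ms (gamma at 40 Hz)
    d = int(round(delay_ms / dt))             # delay in time steps
    n = int(t_max / dt)
    t = dt * np.arange(n)
    th1 = omega * t                           # history: free-running phase
    th2 = omega * t + 0.5                     # small initial phase offset
    for i in range(d, n - 1):                 # Euler integration with delay
        th1[i+1] = th1[i] + dt * (omega + k * np.sin(th2[i-d] - th1[i]))
        th2[i+1] = th2[i] + dt * (omega + k * np.sin(th1[i-d] - th2[i]))
    dphi = (th1[-1] - th2[-1]) % (2 * np.pi)
    return min(dphi, 2 * np.pi - dphi)

print(phase_lag(1.0))     # short delay: lag near 0 (in-phase locking)
print(phase_lag(12.0))    # delay near half a 25 ms gamma cycle: lag near pi
```

For a weakly coupled pair, the in-phase state is stable when the coupling delay is short relative to the oscillation cycle, and anti-phase locking takes over when the delay approaches half a cycle (12.5 ms at 40 Hz), matching the qualitative picture in the abstract.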
18.  Spike-Based Population Coding and Working Memory 
PLoS Computational Biology  2011;7(2):e1001080.
Compelling behavioral evidence suggests that humans can make optimal decisions despite the uncertainty inherent in perceptual or motor tasks. A key question in neuroscience is how populations of spiking neurons can implement such probabilistic computations. In this article, we develop a comprehensive framework for optimal, spike-based sensory integration and working memory in a dynamic environment. We propose that probability distributions are inferred spike-per-spike in recurrently connected networks of integrate-and-fire neurons. As a result, these networks can combine sensory cues optimally, track the state of a time-varying stimulus and memorize accumulated evidence over periods much longer than the time constant of single neurons. Importantly, we propose that population responses and persistent working memory states represent entire probability distributions and not only single stimulus values. These memories are reflected by sustained, asynchronous patterns of activity which make relevant information available to downstream neurons within their short time window of integration. Model neurons act as predictive encoders, only firing spikes which account for new information that has not yet been signaled. Thus, spike times signal deterministically a prediction error, contrary to rate codes in which spike times are considered to be random samples of an underlying firing rate. As a consequence of this coding scheme, a multitude of spike patterns can reliably encode the same information. This results in weakly correlated, Poisson-like spike trains that are sensitive to initial conditions but robust to even high levels of external neural noise. This spike train variability reproduces the one observed in cortical sensory spike trains, but cannot be equated to noise. On the contrary, it is a consequence of optimal spike-based inference. In contrast, we show that rate-based models perform poorly when implemented with stochastically spiking neurons.
Author Summary
Most of our daily actions are subject to uncertainty. Behavioral studies have confirmed that humans handle this uncertainty in a statistically optimal manner. A key question then is what neural mechanisms underlie this optimality, i.e. how can neurons represent and compute with probability distributions. Previous approaches have proposed that probabilities are encoded in the firing rates of neural populations. However, such rate codes appear poorly suited to understand perception in a constantly changing environment. In particular, it is unclear how probabilistic computations could be implemented by biologically plausible spiking neurons. Here, we propose a network of spiking neurons that can optimally combine uncertain information from different sensory modalities and keep this information available for a long time. This implies that neural memories not only represent the most likely value of a stimulus but rather a whole probability distribution over it. Furthermore, our model suggests that each spike conveys new, essential information. Consequently, the observed variability of neural responses cannot simply be understood as noise but rather as a necessary consequence of optimal sensory integration. Our results therefore question strongly held beliefs about the nature of neural “signal” and “noise”.
doi:10.1371/journal.pcbi.1001080
PMCID: PMC3040643  PMID: 21379319
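The "optimal combination of sensory cues" that these spiking networks are proposed to implement reduces, for Gaussian cues, to a precision-weighted average. A minimal sketch of that target computation (with made-up cue values, not the paper's spiking implementation):

```python
def combine(mu1, var1, mu2, var2):
    """Posterior mean and variance from two independent Gaussian cues."""
    w1, w2 = 1.0 / var1, 1.0 / var2          # precisions (inverse variances)
    var = 1.0 / (w1 + w2)
    mu = var * (w1 * mu1 + w2 * mu2)         # precision-weighted average
    return mu, var

mu, var = combine(0.0, 1.0, 2.0, 4.0)        # reliable cue at 0, noisy cue at 2
print(mu, var)                               # mean pulled toward the reliable cue
```

The posterior variance is smaller than either cue's variance alone, which is the behavioral signature of optimal integration that the abstract refers to.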
19.  Spectral Analysis of Input Spike Trains by Spike-Timing-Dependent Plasticity 
PLoS Computational Biology  2012;8(7):e1002584.
Spike-timing-dependent plasticity (STDP) has been observed in many brain areas such as sensory cortices, where it is hypothesized to structure synaptic connections between neurons. Previous studies have demonstrated how STDP can capture spiking information at short timescales using specific input configurations, such as coincident spiking, spike patterns and oscillatory spike trains. However, the corresponding computation in the case of arbitrary input signals is still unclear. This paper provides an overarching picture of the algorithm inherent to STDP, tying together many previous results for commonly used models of pairwise STDP. For a single neuron with plastic excitatory synapses, we show how STDP performs a spectral analysis on the temporal cross-correlograms between its afferent spike trains. The postsynaptic responses and STDP learning window determine kernel functions that specify how the neuron “sees” the input correlations. We thus denote this unsupervised learning scheme as ‘kernel spectral component analysis’ (kSCA). In particular, the whole input correlation structure must be considered since all plastic synapses compete with each other. We find that kSCA is enhanced when weight-dependent STDP induces gradual synaptic competition. For a spiking neuron with a “linear” response and pairwise STDP alone, we find that kSCA resembles principal component analysis (PCA). However, plain STDP does not isolate correlation sources in general, e.g., when they are mixed among the input spike trains. In other words, it does not perform independent component analysis (ICA). Tuning the neuron to a single correlation source can be achieved when STDP is paired with a homeostatic mechanism that reinforces the competition between synaptic inputs. Our results suggest that neuronal networks equipped with STDP can process signals encoded in transient spiking activity at timescales of tens of milliseconds, the range over which typical STDP windows operate.
Author Summary
Tuning feature extraction of sensory stimuli is an important function for synaptic plasticity models. A widely studied example is the development of orientation preference in the primary visual cortex, which can emerge using moving bars in the visual field. A crucial point is the decomposition of stimuli into basic information tokens, e.g., selecting individual bars even though they are presented in overlapping pairs (vertical and horizontal). Among classical unsupervised learning models, independent component analysis (ICA) is capable of isolating basic tokens, whereas principal component analysis (PCA) cannot. This paper focuses on spike-timing-dependent plasticity (STDP), whose functional implications for neural information processing have been intensively studied both theoretically and experimentally in the last decade. Following recent studies demonstrating that STDP can perform ICA for specific cases, we show how STDP relates to PCA or ICA, and in particular explains the conditions under which it switches between them. Here information at the neuronal level is assumed to be encoded in temporal cross-correlograms of spike trains. We find that a linear spiking neuron equipped with pairwise STDP requires additional mechanisms, such as a homeostatic regulation of its output firing, in order to separate mixed correlation sources and thus perform ICA.
doi:10.1371/journal.pcbi.1002584
PMCID: PMC3390410  PMID: 22792056
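The pairwise STDP rule analyzed here weights each pre/post spike pair by an exponential learning window; that window is precisely the kernel through which the synapse "sees" the input cross-correlogram. A minimal additive implementation (amplitudes and time constant are illustrative, not the paper's fitted values):

```python
import numpy as np

def stdp_dw(pre_times, post_times, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Net weight change from all pre/post spike pairs (times in ms)."""
    dw = 0.0
    for t_pre in pre_times:
        for t_post in post_times:
            lag = t_post - t_pre                     # post minus pre
            if lag > 0:
                dw += a_plus * np.exp(-lag / tau)    # pre before post: LTP
            elif lag < 0:
                dw -= a_minus * np.exp(lag / tau)    # post before pre: LTD
    return dw

print(stdp_dw([0.0], [5.0]))    # pre leads post by 5 ms: positive change
print(stdp_dw([5.0], [0.0]))    # post leads pre: negative change
```

Summing this window against the pre/post cross-correlogram gives the expected drift of the weight, which is the quantity the kSCA analysis in the paper operates on.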
20.  Effects of Calcium Spikes in the Layer 5 Pyramidal Neuron on Coincidence Detection and Activity Propagation 
The role of dendritic spiking mechanisms in neural processing remains poorly understood. To investigate the contribution of calcium spikes to the functional properties of single neurons and recurrent networks, we studied a three-compartment model of the layer 5 pyramidal neuron with calcium dynamics in the distal compartment. By performing single neuron simulations with noisy synaptic input and occasional large coincident input at either just the distal compartment or at both somatic and distal compartments, we show that the presence of calcium spikes confers a substantial advantage for coincidence detection in the former case and a lesser advantage in the latter. We further show that the experimentally observed critical frequency phenomenon, in which action potentials triggered by stimuli near the soma above a certain frequency trigger a calcium spike at distal dendrites, leading to further somatic depolarization, is not exhibited by a neuron receiving realistically noisy synaptic input, and so is unlikely to be a necessary component of coincidence detection. We next investigate the effect of calcium spikes on the propagation of spiking activity in a feed-forward network (FFN) embedded in a balanced recurrent network. The excitatory neurons in the network are again connected to either just the distal, or both somatic and distal compartments. With purely distal connectivity, activity propagation is stable and distinguishable for a large range of recurrent synaptic strengths if the feed-forward connections are sufficiently strong, but propagation does not occur in the absence of calcium spikes. When connections are made to both the somatic and the distal compartments, activity propagation is achieved for neurons with active calcium dynamics at a much smaller number of neurons per pool, compared to a network of passive neurons, but quickly becomes unstable as the strength of recurrent synapses increases. Activity propagation at higher scaling factors can be stabilized by increasing network inhibition or introducing short-term depression in the excitatory synapses, but the signal-to-noise ratio remains low. Our results demonstrate that the interaction of synchrony with dendritic spiking mechanisms can have profound consequences for dynamics at the single-neuron and network levels.
doi:10.3389/fncom.2016.00076
PMCID: PMC4957534  PMID: 27499740
calcium spikes; layer 5 pyramidal neurons; coincidence detection; activity propagation; synfire chains; detailed balance; short term plasticity
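The core coincidence-detection idea above can be caricatured without any compartmental modeling: a regenerative (calcium-spike-like) threshold nonlinearity in the distal compartment amplifies coincident input, while passive summation does not. This is a toy sketch, not the paper's three-compartment model; the threshold, boost, and input amplitudes are made up.

```python
def distal_drive(inputs, ca_spikes=True, threshold=2.0, boost=5.0):
    """Somatic current from summed distal input, with an optional
    regenerative (calcium-spike-like) threshold nonlinearity."""
    total = float(sum(inputs))
    if ca_spikes and total >= threshold:
        return total + boost          # calcium spike: supralinear boost
    return total                      # passive summation

coincident, single = [1.2, 1.1], [1.2]
gain_active = distal_drive(coincident) - distal_drive(single)
gain_passive = (distal_drive(coincident, ca_spikes=False)
                - distal_drive(single, ca_spikes=False))
print(gain_active, gain_passive)      # coincidence is amplified only with Ca spikes
```

With the nonlinearity active, a coincident pair crosses the calcium-spike threshold and produces a far larger somatic drive than the same inputs summed passively, which is the single-neuron advantage the abstract reports for distal-only input.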
21.  Successful Reconstruction of a Physiological Circuit with Known Connectivity from Spiking Activity Alone 
PLoS Computational Biology  2013;9(7):e1003138.
Identifying the structure and dynamics of synaptic interactions between neurons is the first step to understanding neural network dynamics. The presence of synaptic connections is traditionally inferred through the use of targeted stimulation and paired recordings or by post-hoc histology. More recently, causal network inference algorithms have been proposed to deduce connectivity directly from electrophysiological signals, such as extracellularly recorded spiking activity. Usually, these algorithms have not been validated on a neurophysiological data set for which the actual circuitry is known. Recent work has shown that traditional network inference algorithms based on linear models typically fail to identify the correct coupling of a small central pattern generating circuit in the stomatogastric ganglion of the crab Cancer borealis. In this work, we show that point process models of observed spike trains can guide inference of relative connectivity estimates that match the known physiological connectivity of the central pattern generator up to a choice of threshold. We elucidate the necessary steps to derive faithful connectivity estimates from a model that incorporates the spike train nature of the data. We then apply the model to measure changes in the effective connectivity pattern in response to two pharmacological interventions, which affect both intrinsic neural dynamics and synaptic transmission. Our results provide the first successful application of a network inference algorithm to a circuit for which the actual physiological synapses between neurons are known. The point process methodology presented here generalizes well to larger networks and can describe the statistics of neural populations. In general we show that advanced statistical models allow for the characterization of effective network structure, deciphering underlying network dynamics and estimating information-processing capabilities.
Author Summary
To appreciate how neural circuits control behaviors, we must understand two things. First, how the neurons comprising the circuit are connected, and second, how neurons and their connections change after learning or in response to neuromodulators. Neuronal connectivity is difficult to determine experimentally, whereas neuronal activity can often be readily measured. We describe a statistical model to estimate circuit connectivity directly from measured activity patterns. We use the timing relationships between observed spikes to predict synaptic interactions between simultaneously observed neurons. The model estimate provides each predicted connection with a curve that represents how strongly, and at which temporal delays, one circuit element effectively influences another. These curves are analogous to synaptic interactions at the level of the membrane potential of biological neurons and share some of their features, such as being inhibitory or excitatory. We test our method on recordings from the pyloric circuit in the crab stomatogastric ganglion, a small circuit whose connectivity is completely known beforehand, and find that the predicted circuit matches the biological one — a result other techniques failed to achieve. In addition, we show that drug manipulations impacting the circuit are revealed by this technique. These results illustrate the utility of our analysis approach for inferring connections from neural spiking activity.
doi:10.1371/journal.pcbi.1003138
PMCID: PMC3708849  PMID: 23874181
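The point-process approach can be caricatured in a few lines: simulate a target neuron whose spiking probability rises one time bin after a source neuron fires, then fit a Bernoulli GLM (logistic regression on the lagged source train) by gradient ascent on the log-likelihood. A positive fitted coupling weight recovers the connection. This is far simpler than the models in the paper; the rates, the single-lag covariate, and all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 20000
a = rng.random(T) < 0.05                        # neuron A fires in 5% of bins
lagged = np.roll(a, 1)
lagged[0] = False                               # no wrap-around
p_b = np.where(lagged, 0.4, 0.02)               # B is excited one bin after A
b = (rng.random(T) < p_b).astype(float)

X = np.column_stack([np.ones(T), lagged.astype(float)])
w = np.zeros(2)                                 # [baseline, A->B coupling]
for _ in range(3000):                           # gradient ascent on log-likelihood
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w += 2.0 * (X.T @ (b - p)) / T

print(round(w[1], 2))                           # fitted coupling: clearly positive
```

Real applications add many lags (a coupling curve per connection, as in the abstract), spike-history terms, and a threshold on the fitted weights, but the estimation principle is the same.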
22.  Hidden synaptic differences in a neural circuit underlie differential behavioral susceptibility to a neural injury 
eLife  2014;3:e02598.
Individuals vary in their responses to stroke and trauma, hampering predictions of outcomes. One reason might be that neural circuits contain hidden variability that becomes relevant only when those individuals are challenged by injury. We found that in the mollusc, Tritonia diomedea, subtle differences between animals within the neural circuit underlying swimming behavior had no behavioral relevance under normal conditions but caused differential vulnerability of the behavior to a particular brain lesion. The extent of motor impairment correlated with the site of spike initiation in a specific neuron in the neural circuit, which was determined by the strength of an inhibitory synapse onto this neuron. Artificially increasing or decreasing this inhibitory synaptic conductance with dynamic clamp correspondingly altered the extent of motor impairment by the lesion without affecting normal operation. The results suggest that neural circuit differences could serve as hidden phenotypes for predicting the behavioral outcome of neural damage.
DOI: http://dx.doi.org/10.7554/eLife.02598.001
eLife digest
The outcome of a traumatic brain injury or a stroke can vary considerably from person to person, making it difficult to provide a reliable prognosis for any individual person. If clinicians were able to predict outcomes with better accuracy, patients would benefit from more tailored treatments. However, the sheer complexity of the mammalian brain has hindered attempts to explain why similar damage to the brain can have such different effects on different individuals.
Now Sakurai et al. have used a mollusc model to show that the extensive variation between individuals could be caused by hidden differences in their neural networks. Crucially, this natural variation has no effect on normal behavior; it only becomes obvious when the brain is injured. The experiments were performed on a type of sea slug called Tritonia diomedea.
When these sea slugs encounter a predator they respond by swimming away, rhythmically flexing their whole body. This repetitive motion is driven by a specific neural network in which two neurons—called a cerebral 2 (C2) neuron and a ventral swim interneuron—play important roles. Both of these neurons are quite long and they run alongside each other in the brain, with the ventral swim interneuron being activated by signals sent from the C2 neuron at multiple ‘synaptic connections’ between the two.
Sakurai et al. showed that the strength of the connections between the C2 neuron and the ventral swim interneuron varied substantially between animals. However, despite this variation, the sea slugs still performed the same number of whole-body flexions as they swam.
Sakurai et al. then made a lesion to the brain, which removed about half of the connections between the C2 neuron and the ventral swim interneuron. This meant that the response of the sea slugs to predators depended on the strength of the remaining connections between the two neurons. Sakurai et al. found that the responses of some sea slugs were only mildly impaired, whereas others were severely impaired. This showed that although variations in the strength of the individual connections had no effect on swimming behavior of normal sea slugs, the same variations had a substantial effect when the brain was damaged. Moreover, by creating computer-generated synapses between the C2 neuron and the ventral swim interneuron, Sakurai et al. were able to change the level of impairment.
These findings suggest that the variability in human responses to brain injury could be due to hidden differences at the neuronal level. In everyday life, these differences are unimportant and individuals are able to function in similar ways in spite of subtle differences in their neuronal configurations. However, when the brain is damaged, the differences become more important. This suggests that certain configurations within neuronal networks are more resistant to brain damage than others.
DOI: http://dx.doi.org/10.7554/eLife.02598.002
doi:10.7554/eLife.02598
PMCID: PMC4084405  PMID: 24920390
Tritonia diomedea; individual variability; synapse; neural injury; central pattern generator; dynamic clamp; other
23.  Excitatory, Inhibitory, and Structural Plasticity Produce Correlated Connectivity in Random Networks Trained to Solve Paired-Stimulus Tasks 
The pattern of connections among cortical excitatory cells with overlapping arbors is non-random. In particular, correlations among connections produce clustering – cells in cliques connect to each other with high probability, but with lower probability to cells in other spatially intertwined cliques. In this study, we model initially randomly connected sparse recurrent networks of spiking neurons with random, overlapping inputs to investigate which functional and structural synaptic plasticity mechanisms sculpt network connections into the patterns measured in vitro. Our Hebbian implementation of structural plasticity causes a removal of connections between uncorrelated excitatory cells, followed by their random replacement. To model a biconditional discrimination task, we stimulate the network via pairs (A + B, C + D, A + D, and C + B) of four inputs (A, B, C, and D). We find that networks producing neurons most responsive to specific paired inputs – a building block of computation and an essential cortical function – contain the excessive clustering of excitatory synaptic connections observed in cortical slices. The same networks produce the best performance in a behavioral readout of the networks’ ability to complete the task. A plasticity mechanism operating on inhibitory connections, long-term potentiation of inhibition, when combined with structural plasticity, indirectly enhances the clustering of excitatory cells via excitatory connections. A rate-dependent (triplet) form of spike-timing-dependent plasticity (STDP) between excitatory cells is less effective, and basic STDP is detrimental. Clustering also arises in networks stimulated with single stimuli and in networks undergoing raised levels of spontaneous activity when structural plasticity is combined with functional plasticity. In conclusion, spatially intertwined clusters or cliques of connected excitatory cells can arise via a Hebbian form of structural plasticity operating in initially randomly connected networks.
doi:10.3389/fncom.2011.00037
PMCID: PMC3170885  PMID: 21991253
structural plasticity; connectivity; Hebbian learning; network; simulation; correlations; STDP; inhibitory plasticity
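The structural-plasticity rule summarized above (remove connections between uncorrelated excitatory cells, replace them at random) can be sketched as a single rewiring step on a boolean connectivity matrix. The connection density, the stand-in correlation values, and the pruning threshold are all illustrative, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20
conn = rng.random((n, n)) < 0.1                  # sparse random directed connectivity
np.fill_diagonal(conn, False)
corr = rng.random((n, n))                        # stand-in pairwise activity correlations

def rewire(conn, corr, threshold=0.5):
    """One structural-plasticity step: prune uncorrelated pairs, regrow at random."""
    conn = conn.copy()
    prune = conn & (corr < threshold)            # connected but uncorrelated
    n_pruned = int(prune.sum())
    conn[prune] = False                          # remove those synapses
    vacant = ~conn & ~np.eye(conn.shape[0], dtype=bool)
    new = rng.choice(np.flatnonzero(vacant), size=n_pruned, replace=False)
    conn[np.unravel_index(new, conn.shape)] = True   # random replacement
    return conn

new_conn = rewire(conn, corr)
print(int(conn.sum()), int(new_conn.sum()))      # total synapse count is conserved
```

Iterating this step while the correlations themselves are shaped by functional plasticity is what, per the abstract, concentrates connections within correlated cliques.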
24.  Impact of Adaptation Currents on Synchronization of Coupled Exponential Integrate-and-Fire Neurons 
PLoS Computational Biology  2012;8(4):e1002478.
The ability of spiking neurons to synchronize their activity in a network depends on the response behavior of these neurons, as quantified by the phase response curve (PRC), and on coupling properties. The PRC characterizes the effects of transient inputs on spike timing and can be measured experimentally. Here we use the adaptive exponential integrate-and-fire (aEIF) neuron model to determine how subthreshold and spike-triggered slow adaptation currents shape the PRC. Based on that, we predict how synchrony and phase-locked states of coupled neurons change in the presence of synaptic delays and unequal coupling strengths. We find that increased subthreshold adaptation currents cause a transition of the PRC from only phase advances to phase advances and delays in response to excitatory perturbations. Increased spike-triggered adaptation currents, on the other hand, predominantly skew the PRC to the right. Both adaptation-induced changes of the PRC are modulated by spike frequency, being more prominent at lower frequencies. Applying phase reduction theory, we show that subthreshold adaptation stabilizes synchrony for pairs of coupled excitatory neurons, while spike-triggered adaptation causes locking with a small phase difference, as long as synaptic heterogeneities are negligible. For inhibitory pairs synchrony is stable and robust against conduction delays, and adaptation can mediate bistability of in-phase and anti-phase locking. We further demonstrate that stable synchrony and bistable in-/anti-phase locking of pairs carry over to synchronization and clustering of larger networks. The effects of adaptation in aEIF neurons on PRCs and network dynamics qualitatively reflect those of biophysical adaptation currents in detailed Hodgkin-Huxley-based neurons, which underscores the utility of the aEIF model for investigating the dynamical behavior of networks. Our results suggest neuronal spike frequency adaptation as a mechanism for synchronizing low-frequency oscillations in local excitatory networks, but indicate that inhibition rather than excitation generates coherent rhythms at higher frequencies.
Author Summary
Synchronization of neuronal spiking in the brain is related to cognitive functions, such as perception, attention, and memory. It is therefore important to determine which properties of neurons influence their collective behavior in a network and to understand how. A prominent feature of many cortical neurons is spike frequency adaptation, which is caused by slow transmembrane currents. We investigated how these adaptation currents affect the synchronization tendency of coupled model neurons. Using the efficient adaptive exponential integrate-and-fire (aEIF) model and a biophysically detailed neuron model for validation, we found that increased adaptation currents promote synchronization of coupled excitatory neurons at lower spike frequencies, as long as the conduction delays between the neurons are negligible. Inhibitory neurons, on the other hand, synchronize in the presence of conduction delays, with or without adaptation currents. Our results emphasize the utility of the aEIF model for computational studies of neuronal network dynamics. We conclude that adaptation currents provide a mechanism to generate low-frequency oscillations in local populations of excitatory neurons, while faster rhythms seem to be caused by inhibition rather than excitation.
doi:10.1371/journal.pcbi.1002478
PMCID: PMC3325187  PMID: 22511861
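A PRC of the kind discussed in this entry can be measured numerically by perturbing a tonically spiking neuron at different phases and recording the resulting spike advance. The sketch below uses a plain leaky integrate-and-fire neuron for brevity rather than the aEIF model, with purely illustrative parameters; for the LIF, an excitatory kick always advances the next spike (a "type I"-like PRC with phase advances only, the regime the abstract says weak subthreshold adaptation produces).

```python
def lif_first_spike(kick_time=None, kick=0.05, dt=0.01,
                    i_ext=1.5, tau=10.0, v_th=1.0):
    """First spike time (ms) of a tonically driven LIF neuron, with an
    optional small depolarizing kick applied at kick_time."""
    v, t, kicked = 0.0, 0.0, False
    while v < v_th:
        if kick_time is not None and not kicked and t >= kick_time:
            v += kick                     # transient excitatory perturbation
            kicked = True
        v += dt * (i_ext - v) / tau       # forward-Euler membrane update
        t += dt
    return t

T0 = lif_first_spike()                    # unperturbed period (~11 ms here)
prc = [(T0 - lif_first_spike(kick_time=p * T0)) / T0 for p in (0.2, 0.5, 0.8)]
print([round(x, 3) for x in prc])         # advances grow with phase for the LIF
```

Repeating the same perturbation protocol on an aEIF model with strong subthreshold adaptation would, per the abstract, push the early-phase values negative (phase delays).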
25.  Information processing in the CNS: a supramolecular chemistry? 
Cognitive Neurodynamics  2015;9(5):463-477.
How does the central nervous system process information? Current theories are based on two tenets: (a) information is transmitted by action potentials, the language by which neurons communicate with each other; and (b) homogeneous neuronal assemblies of cortical circuits operate on these neuronal messages, where the operations are characterized by the intrinsic connectivity among neuronal populations. In this view, the size and time course of any spike is stereotypic, and the information is restricted to the temporal sequence of the spikes, namely the “neural code”. However, an increasing amount of novel data points towards an alternative hypothesis: (a) the role of the neural code in information processing is overemphasized. Instead of simply passing messages, action potentials play a role in dynamic coordination at multiple spatial and temporal scales, establishing network interactions across several levels of a hierarchical modular architecture, modulating and regulating the propagation of neuronal messages. (b) Information is processed at all levels of neuronal infrastructure, from macromolecules to population dynamics. For example, intra-neuronal factors (changes in protein conformation, concentration and synthesis) and extra-neuronal factors (extracellular proteolysis, substrate patterning, myelin plasticity, microbes, metabolic status) can have a profound effect on neuronal computations. This means molecular message passing may have cognitive connotations. This essay introduces the concept of a “supramolecular chemistry”, involving the storage of information at the molecular level and its retrieval, transfer and processing at the supramolecular level, through transitory non-covalent molecular processes that are self-organized, self-assembled and dynamic. Finally, we note that the cortex comprises extremely heterogeneous cells, with distinct regional variations in macromolecular assembly, receptor repertoire and intrinsic microcircuitry. This suggests that every neuron (or group of neurons) embodies different molecular information that has an operational effect on neuronal computation.
doi:10.1007/s11571-015-9337-1
PMCID: PMC4567996  PMID: 26379797
Information; Processing; Action potential; Neural code; Supramolecular; Macromolecule; Embodiment