news

Two Postdoc Positions Open!

The Neural Dynamics Lab, also known as the CATNIP Lab, headed by Memming Park at the Champalimaud Centre for the Unknown, Lisbon, Portugal, is looking for talented next-generation scientists and engineers to advance neurotechnology and neural data analysis tools to better understand neural dynamics. Our broad goal is to obtain an effective systems-level description of relevant neural dynamics in the context of cognitive functions and dysfunctions, such as working memory, decision-making, motor control, and disorders of consciousness.

Read More

journalclub

SpikeCaKe: Semi-Analytic Nonparametric Bayesian Inference for Spike-Spike Neuronal Connectivity (2019)

Luca Ambrogioni, Patrick Ebel, Max Hinne, Umut Guclu, Marcel van Gerven, Eric Maris. The 22nd International Conference on Artificial Intelligence and Statistics (AISTATS 2019), 16-18 April 2019, Naha, Okinawa, Japan (2019) (local cache)

Read More

news

COSYNE 2020 Impressions

This year’s COSYNE had great content in all areas of computational neuroscience. In this post we give highlights of some of the posters, talks, and workshop sessions that we found to be interesting or relevant.

Read More

journalclub

Recurrent Network Models of Sequence Generation and Memory (2016)

  • Paper Kanaka Rajan, Christopher D. Harvey, David W. Tank. Recurrent Network Models of Sequence Generation and Memory, Neuron. (2016)
  • Abstract

Sequential activation of neurons is a common feature of network activity during a variety of behaviors, including working memory and decision making. Previous network models for sequences and memory emphasized specialized architectures in which a principled mechanism is pre-wired into their connectivity. Here we demonstrate that, starting from random connectivity and modifying a small fraction of connections, a largely disordered recurrent network can produce sequences and implement working memory efficiently. We use this process, called Partial In-Network Training (PINning), to model and match cellular resolution imaging data from the posterior parietal cortex during a virtual memory-guided two-alternative forced-choice task. Analysis of the connectivity reveals that sequences propagate by the cooperation between recurrent synaptic interactions and external inputs, rather than through feedforward or asymmetric connections. Together, our results suggest that neural sequences may emerge through learning from largely unstructured network architectures.
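To make the "train only a small fraction of a random network" idea concrete, here is a toy sketch. It is not the authors' PINning procedure (which uses FORCE-style online learning); instead it solves, by teacher-forced least squares, for a random 10% of the weights so that a random rate network reproduces a Gaussian-bump sequence. All sizes and the target sequence are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T, frac = 80, 200, 0.1            # units, time steps, fraction of plastic synapses

# Target: a neural sequence -- each unit fires a Gaussian bump at a different time.
t = np.arange(T)
centers = np.linspace(10, T - 10, N)
R = 0.9 * np.exp(-0.5 * ((t[None, :] - centers[:, None]) / 8.0) ** 2)   # (N, T)

W = rng.normal(0.0, 1.0 / np.sqrt(N), (N, N))    # random initial connectivity
mask = rng.random((N, N)) < frac                 # the small trainable subset

X, Y = R[:, :-1], np.arctanh(R[:, 1:])           # one-step targets for r(t+1) = tanh(W r(t))

def one_step_mse(W):
    return np.mean((Y - W @ X) ** 2)

err_before = one_step_mse(W)
# For each unit, adjust only its plastic input weights by least squares, so the
# frozen weights plus the plastic ones predict the next state of the sequence.
for i in range(N):
    j = np.flatnonzero(mask[i])
    if j.size:
        W[i, j] += np.linalg.lstsq(X[j].T, Y[i] - W[i] @ X, rcond=None)[0]
err_after = one_step_mse(W)

# Run the trained network autonomously from the target's initial condition.
r = np.empty_like(R)
r[:, 0] = R[:, 0]
for k in range(T - 1):
    r[:, k + 1] = np.tanh(W @ r[:, k])
```

Even this crude offline variant shows the point of the paper: the one-step error drops while the vast majority of the connectivity stays random and untouched.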

Read More

journalclub

Neural Representation of Spatial Topology in the Rodent Hippocampus (2014)

  • Paper Zhe Chen, Stephen N. Gomperts, Jun Yamamoto, and Matthew A. Wilson. Neural Representation of Spatial Topology in the Rodent Hippocampus, Neural Computation. (2014)
  • Abstract

Pyramidal cells in the rodent hippocampus often exhibit clear spatial tuning in navigation. Although it has been long suggested that pyramidal cell activity may underlie a topological code rather than a topographic code, it remains unclear whether an abstract spatial topology can be encoded in the ensemble spiking activity of hippocampal place cells. Using a statistical approach developed previously, we investigate this question and related issues in greater detail. We recorded ensembles of hippocampal neurons as rodents freely foraged in one- and two-dimensional spatial environments, and we used a “decode-to-uncover” strategy to examine the temporally structured patterns embedded in the ensemble spiking activity in the absence of observed spatial correlates during periods of rodent navigation or awake immobility. Specifically, the spatial environment was represented by a finite discrete state space. Trajectories across spatial locations (“states”) were associated with consistent hippocampal ensemble spiking patterns, which were characterized by a state transition matrix. From this state transition matrix, we inferred a topology graph that defined the connectivity in the state space. In both one- and two-dimensional environments, the extracted behavior patterns from the rodent hippocampal population codes were compared against randomly shuffled spike data. In contrast to a topographic code, our results support the efficiency of topological coding in the presence of sparse sample size and fuzzy space mapping. This computational approach allows us to quantify the variability of ensemble spiking activity, to examine hippocampal population codes during off-line states, and to quantify the topological complexity of the environment.
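The last step of the pipeline — turning an empirical state-transition matrix into a topology graph — is easy to sketch. Below, a synthetic "decoded" state sequence (a random walk on a 6-state ring, standing in for decoded place-cell states) yields a transition matrix whose thresholded entries recover the ring's connectivity; the data and the 0.05 threshold are toy choices, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(1)

# Ground-truth environment: 6 discrete states on a ring (each state connects
# only to its two neighbors).
n_states = 6
true_adj = np.zeros((n_states, n_states), bool)
for s in range(n_states):
    true_adj[s, (s + 1) % n_states] = true_adj[s, (s - 1) % n_states] = True

# A decoded state sequence: a random walk that respects the true topology.
seq = [0]
for _ in range(5000):
    seq.append((seq[-1] + rng.choice([-1, 1])) % n_states)
seq = np.array(seq)

# Empirical state-transition matrix from the sequence.
P = np.zeros((n_states, n_states))
for a, b in zip(seq[:-1], seq[1:]):
    P[a, b] += 1
P /= P.sum(axis=1, keepdims=True)

# Threshold transition probabilities to recover the topology graph.
inferred_adj = P > 0.05
np.fill_diagonal(inferred_adj, False)
```

With enough transitions, the inferred adjacency matches the ring exactly — the "decode-to-uncover" logic in miniature.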

Read More

journalclub

Stabilizing embedology: Geometry-preserving delay-coordinate maps (2018)

  • Paper Armin Eftekhari, Han Lun Yap, Michael B. Wakin, and Christopher J. Rozell. Stabilizing embedology: Geometry-preserving delay-coordinate maps, Physical Review E. (2018)
  • Abstract

Delay-coordinate mapping is an effective and widely used technique for reconstructing and analyzing the dynamics of a nonlinear system based on time-series outputs. The efficacy of delay-coordinate mapping has long been supported by Takens’ embedding theorem, which guarantees that delay-coordinate maps use the time-series output to provide a reconstruction of the hidden state space that is a one-to-one embedding of the system’s attractor. While this topological guarantee ensures that distinct points in the reconstruction correspond to distinct points in the original state space, it does not characterize the quality of this embedding or illuminate how the specific parameters affect the reconstruction. In this paper, we extend Takens’ result by establishing conditions under which delay-coordinate mapping is guaranteed to provide a stable embedding of a system’s attractor. Beyond only preserving the attractor topology, a stable embedding preserves the attractor geometry by ensuring that distances between points in the state space are approximately preserved. In particular, we find that delay-coordinate mapping stably embeds an attractor of a dynamical system if the stable rank of the system is large enough to be proportional to the dimension of the attractor. The stable rank reflects the relation between the sampling interval and the number of delays in delay-coordinate mapping. Our theoretical findings give guidance to choosing system parameters, echoing the tradeoff between irrelevancy and redundancy that has been heuristically investigated in the literature. Our initial result is stated for attractors that are smooth submanifolds of Euclidean space, with extensions provided for the case of strange attractors.
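For readers new to delay-coordinate maps, here is the construction the theorems are about, in a few lines: stack time-shifted copies of a scalar observation to rebuild a state-space picture. The sinusoid example and the delay of 50 samples are illustrative choices.

```python
import numpy as np

def delay_embed(x, n_delays, tau):
    """Stack n_delays tau-shifted copies of the scalar series x into delay vectors."""
    T = len(x) - (n_delays - 1) * tau
    return np.column_stack([x[i * tau : i * tau + T] for i in range(n_delays)])

# Example: observe only one coordinate of a 2-D limit cycle; the delay map
# recovers a closed loop (a one-to-one image of the hidden circle).
t = np.linspace(0, 20 * np.pi, 4000)
x = np.sin(t)                               # the observed scalar output
emb = delay_embed(x, n_delays=2, tau=50)    # tau*dt is roughly a quarter period
```

The paper's "stable rank" condition is, informally, about choosing `n_delays` and `tau` so that this map not only avoids self-intersections (Takens) but also keeps distances roughly intact.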

Read More

journalclub

A Sparse Coding Model with Synaptically Local Plasticity and Spiking Neurons Can Account for the Diverse Shapes of V1 Simple Cell Receptive Fields (2011)

  • Paper Joel Zylberberg, Jason Timothy Murphy, Michael Robert DeWeese. A Sparse Coding Model with Synaptically Local Plasticity and Spiking Neurons Can Account for the Diverse Shapes of V1 Simple Cell Receptive Fields, PLOS Computational Biology. (2011)
  • Abstract

Sparse coding algorithms trained on natural images can accurately predict the features that excite visual cortical neurons, but it is not known whether such codes can be learned using biologically realistic plasticity rules. We have developed a biophysically motivated spiking network, relying solely on synaptically local information, that can predict the full diversity of V1 simple cell receptive field shapes when trained on natural images. This represents the first demonstration that sparse coding principles, operating within the constraints imposed by cortical architecture, can successfully reproduce these receptive fields. We further prove, mathematically, that sparseness and decorrelation are the key ingredients that allow for synaptically local plasticity rules to optimize a cooperative, linear generative image model formed by the neural representation. Finally, we discuss several interesting emergent properties of our network, with the intent of bridging the gap between theoretical and experimental studies of visual cortex.
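For orientation, here is what "sparse coding with local updates" looks like in its plainest non-spiking form — not the authors' spiking network, just generic ISTA inference plus a Hebbian-style (residual × activity) dictionary update on random stand-in data. All sizes and step sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)
n_pix, n_units, n_patches = 64, 32, 500
X = rng.normal(size=(n_pix, n_patches))            # stand-in for whitened image patches

def ista(Phi, X, lam=0.1, n_iter=50):
    """Sparse codes A = argmin 0.5*||X - Phi A||_F^2 + lam*||A||_1 (ISTA)."""
    L = np.linalg.norm(Phi, 2) ** 2                # Lipschitz constant of the gradient
    A = np.zeros((Phi.shape[1], X.shape[1]))
    for _ in range(n_iter):
        A -= (Phi.T @ (Phi @ A - X)) / L           # gradient step on the quadratic term
        A = np.sign(A) * np.maximum(np.abs(A) - lam / L, 0.0)   # soft threshold
    return A

Phi = rng.normal(size=(n_pix, n_units))
Phi /= np.linalg.norm(Phi, axis=0)
for _ in range(10):                                # a few dictionary-learning sweeps
    A = ista(Phi, X)
    Phi += 0.01 * (X - Phi @ A) @ A.T / n_patches  # residual x activity: a local-style update
    Phi /= np.maximum(np.linalg.norm(Phi, axis=0), 1e-12)

A = ista(Phi, X)
sparsity = float(np.mean(A == 0.0))
```

The paper's contribution is showing that a biophysical spiking circuit, using only synaptically local information, can do the job this idealized loop does — and that sparseness plus decorrelation is what makes locality sufficient.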

Read More

journalclub

Bayesian Machine Learning: EEG/MEG Signal Processing Measurements (2016)

  • Paper Wei Wu, Srikantan Nagarajan, and Zhe Chen. Bayesian Machine Learning: EEG/MEG Signal Processing Measurements, IEEE Signal Processing Magazine. (2016)
  • Abstract

Electroencephalography (EEG) and magnetoencephalography (MEG) are the most common noninvasive brain-imaging techniques for monitoring electrical brain activity and inferring brain function. The central goal of EEG/MEG analysis is to extract informative brain spatiotemporal-spectral patterns or to infer functional connectivity between different brain areas, which is directly useful for neuroscience or clinical investigations. Due to its potentially complex nature [such as nonstationarity, high dimensionality, subject variability, and low signal-to-noise ratio (SNR)], EEG/MEG signal processing poses some great challenges for researchers. These challenges can be addressed in a principled manner via Bayesian machine learning (BML). BML is an emerging field that integrates Bayesian statistics, variational methods, and machine-learning techniques to solve various problems from regression, prediction, outlier detection, feature extraction, and classification. BML has recently gained increasing attention and widespread successes in signal processing and big-data analytics, such as in source reconstruction, compressed sensing, and information fusion. To review recent advances and to foster new research ideas, we provide a tutorial on several important emerging BML research topics in EEG/MEG signal processing and present representative examples in EEG/MEG applications.

Read More

journalclub

Inference of neuronal functional circuitry with spike-triggered non-negative matrix factorization (2017)

  • Paper Jian K. Liu, Helene M. Schreyer, Arno Onken, Fernando Rozenblit, Mohammad H. Khani, Vidhyasankar Krishnamoorthy, Stefano Panzeri & Tim Gollisch. Inference of neuronal functional circuitry with spike-triggered non-negative matrix factorization, Nature Communications. (2017)
  • Abstract

Neurons in sensory systems often pool inputs over arrays of presynaptic cells, giving rise to functional subunits inside a neuron’s receptive field. The organization of these subunits provides a signature of the neuron’s presynaptic functional connectivity and determines how the neuron integrates sensory stimuli. Here we introduce the method of spike-triggered nonnegative matrix factorization for detecting the layout of subunits within a neuron’s receptive field. The method only requires the neuron’s spiking responses under finely structured sensory stimulation and is therefore applicable to large populations of simultaneously recorded neurons. Applied to recordings from ganglion cells in the salamander retina, the method retrieves the receptive fields of presynaptic bipolar cells, as verified by simultaneous bipolar and ganglion cell recordings. The identified subunit layouts allow improved predictions of ganglion cell responses to natural stimuli and reveal shared bipolar cell input into distinct types of ganglion cells.
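The core computation is easy to sketch: collect the spike-triggered stimulus ensemble, clip it to non-negative values, and factorize it with standard (Lee-Seung) multiplicative NMF updates so that the rows of `H` play the role of subunit layouts. The toy ground truth below (three non-overlapping block filters) is invented for illustration and is much simpler than retinal data.

```python
import numpy as np

rng = np.random.default_rng(3)
D, n_spikes, K = 100, 800, 3                       # stimulus dims, spikes, subunits

# Toy ground truth: three non-overlapping subunit filters; each spike is
# triggered by a stimulus with strong drive in one randomly chosen subunit.
subunits = np.zeros((K, D))
for k in range(K):
    subunits[k, 20 * k : 20 * k + 20] = 1.0

ste = 0.2 * rng.normal(size=(n_spikes, D))         # spike-triggered stimulus ensemble
for s in range(n_spikes):
    ste[s] += subunits[rng.integers(K)]

V = np.maximum(ste, 0.0)                           # NMF needs non-negative data

# Multiplicative updates for V ~= W @ H; H rows ~ recovered subunit layouts.
W = rng.random((n_spikes, K)) + 0.1
H = rng.random((K, D)) + 0.1
err0 = np.linalg.norm(V - W @ H)
for _ in range(200):
    H *= (W.T @ V) / (W.T @ W @ H + 1e-9)
    W *= (V @ H.T) / (W @ H @ H.T + 1e-9)
err = np.linalg.norm(V - W @ H)
```

The multiplicative updates keep both factors non-negative and never increase the reconstruction error, which is why this scales to the large simultaneously recorded populations the paper targets.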

Read More

journalclub

Complementary codes for odor identity and intensity in olfactory cortex (2017)

  • Paper Kevin A Bolding, Kevin M Franks. Complementary codes for odor identity and intensity in olfactory cortex, eLife. (2017)
  • Abstract

The ability to represent both stimulus identity and intensity is fundamental for perception. Using large-scale population recordings in awake mice, we find distinct coding strategies facilitate non-interfering representations of odor identity and intensity in piriform cortex. Simply knowing which neurons were activated is sufficient to accurately represent odor identity, with no additional information about identity provided by spike time or spike count. Decoding analyses indicate that cortical odor representations are not sparse. Odorant concentration had no systematic effect on spike counts, indicating that rate cannot encode intensity. Instead, odor intensity can be encoded by temporal features of the population response. We found a subpopulation of rapid, largely concentration-invariant responses was followed by another population of responses whose latencies systematically decreased at higher concentrations. Cortical inhibition transforms olfactory bulb output to sharpen these dynamics. Our data therefore reveal complementary coding strategies that can selectively represent distinct features of a stimulus.
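The claim that "which neurons were activated is sufficient" has a simple decoding counterpart: binarize the responses and classify by nearest mean binary pattern. The simulation below builds a toy code in that spirit (fixed random active subsets per odor, counts carrying no extra identity signal); all numbers are invented, and for brevity the templates are not cross-validated.

```python
import numpy as np

rng = np.random.default_rng(4)
n_neurons, n_odors, n_trials = 120, 5, 40

# Toy code: each odor activates a fixed random ~15% of neurons; spike counts on
# active neurons are noisy and, by construction, carry no extra identity signal.
active = rng.random((n_odors, n_neurons)) < 0.15
counts = np.empty((n_odors, n_trials, n_neurons))
for o in range(n_odors):
    counts[o] = (rng.poisson(5.0, (n_trials, n_neurons)) * active[o]
                 + rng.poisson(0.2, (n_trials, n_neurons)))

# Decode identity from the binarized response alone: nearest mean binary pattern.
binary = (counts > 0).astype(float)
templates = binary.mean(axis=1)                    # (n_odors, n_neurons)
correct = 0
for o in range(n_odors):
    for tr in range(n_trials):
        pred = np.argmin(((binary[o, tr] - templates) ** 2).sum(axis=1))
        correct += int(pred == o)
acc = correct / (n_odors * n_trials)
```

In the paper the interesting part is the converse analyses: rate adds nothing for identity, while response latencies, not counts, track concentration.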

Read More

news

COSYNE 2018 Workshops

After a cold trip, we finally made it to Breckenridge. I was lucky to catch a shuttle before the storm blocked the road, and the resort was a warm, welcoming place to nurse my headache.

Read More

journalclub

Identifying musical pieces from fMRI data using encoding and decoding models (2018)

  • Paper Sebastian Hoefle, Annerose Engel, Rodrigo Basilio, Vinoo Alluri, Petri Toiviainen, Maurício Cagy & Jorge Moll. Identifying musical pieces from fMRI data using encoding and decoding models, Scientific Reports. (2018)
  • Abstract

Encoding models can reveal and decode neural representations in the visual and semantic domains. However, a thorough understanding of how distributed information in auditory cortices and temporal evolution of music contribute to model performance is still lacking in the musical domain. We measured fMRI responses during naturalistic music listening and constructed a two-stage approach that first mapped musical features in auditory cortices and then decoded novel musical pieces. We then probed the influence of stimuli duration (number of time points) and spatial extent (number of voxels) on decoding accuracy. Our approach revealed a linear increase in accuracy with duration and a point of optimal model performance for the spatial extent. We further showed that Shannon entropy is a driving factor, boosting accuracy up to 95% for music with highest information content. These findings provide key insights for future decoding and reconstruction algorithms and open new avenues for possible clinical applications.
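The two-stage recipe — fit a regularized encoding model on training stimuli, then identify a novel stimulus by correlating measured responses with each candidate's predicted response — can be sketched on synthetic data. Everything below (feature counts, noise levels, the ridge penalty of 10) is a toy stand-in, not the paper's pipeline.

```python
import numpy as np

rng = np.random.default_rng(5)
n_feat, n_vox, T_train, n_test = 10, 200, 300, 20

B = rng.normal(size=(n_feat, n_vox))               # true feature-to-voxel weights

F_train = rng.normal(size=(T_train, n_feat))       # musical features, training set
Y_train = F_train @ B + rng.normal(size=(T_train, n_vox))

# Stage 1: ridge-regression encoding model mapping features to each voxel.
lam = 10.0
B_hat = np.linalg.solve(F_train.T @ F_train + lam * np.eye(n_feat),
                        F_train.T @ Y_train)

# Stage 2: identify which of n_test novel pieces produced a measured response,
# by correlating it with each piece's predicted response.
F_test = rng.normal(size=(n_test, n_feat))
Y_pred = F_test @ B_hat
hits = 0
for i in range(n_test):
    y_obs = F_test[i] @ B + rng.normal(size=n_vox)   # measured response to piece i
    r = [np.corrcoef(y_obs, Y_pred[j])[0, 1] for j in range(n_test)]
    hits += int(np.argmax(r) == i)
acc = hits / n_test
```

The paper's duration and spatial-extent analyses amount to repeating this identification while varying the number of time points and voxels entering stage 2.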

Read More

journalclub

Task-dependent recurrent dynamics in visual cortex (2017)

  • Paper Satohiro Tajima, Kowa Koida, Chihiro I Tajima, Hideyuki Suzuki, Kazuyuki Aihara, Hidehiko Komatsu. Task-dependent recurrent dynamics in visual cortex, eLife. (2017)
  • Abstract

The capacity for flexible sensory-action association in animals has been related to context-dependent attractor dynamics outside the sensory cortices. Here, we report a line of evidence that flexibly modulated attractor dynamics during task switching are already present in the higher visual cortex in macaque monkeys. With a nonlinear decoding approach, we can extract the particular aspect of the neural population response that reflects the task-induced emergence of bistable attractor dynamics in a neural population, which could be obscured by standard unsupervised dimensionality reductions such as PCA. The dynamical modulation selectively increases the information relevant to task demands, indicating that such modulation is beneficial for perceptual decisions. A computational model that features nonlinear recurrent interaction among neurons with a task-dependent background input replicates the key properties observed in the experimental data. These results suggest that the context-dependent attractor dynamics involving the sensory cortex can underlie flexible perceptual abilities.

Read More

journalclub

Cerebellar granule cells encode the expectation of reward (2017)

  • Paper Mark J. Wagner, Tony Hyun Kim, Joan Savall, Mark J. Schnitzer, Liqun Luo. Cerebellar granule cells encode the expectation of reward, Nature. (2017)
  • Abstract

The human brain contains approximately 60 billion cerebellar granule cells, which outnumber all other brain neurons combined. Classical theories posit that a large, diverse population of granule cells allows for highly detailed representations of sensorimotor context, enabling downstream Purkinje cells to sense fine contextual changes. Although evidence suggests a role for the cerebellum in cognition, granule cells are known to encode only sensory and motor context. Here, using two-photon calcium imaging in behaving mice, we show that granule cells convey information about the expectation of reward. Mice initiated voluntary forelimb movements for delayed sugar-water reward. Some granule cells responded preferentially to reward or reward omission, whereas others selectively encoded reward anticipation. Reward responses were not restricted to forelimb movement, as a Pavlovian task evoked similar responses. Compared to predictable rewards, unexpected rewards elicited markedly different granule cell activity despite identical stimuli and licking responses. In both tasks, reward signals were widespread throughout multiple cerebellar lobules. Tracking the same granule cells over several days of learning revealed that cells with reward-anticipating responses emerged from those that responded at the start of learning to reward delivery, whereas reward-omission responses grew stronger as learning progressed. The discovery of predictive, non-sensorimotor encoding in granule cells is a major departure from the current understanding of these neurons and markedly enriches the contextual information available to postsynaptic Purkinje cells, with important implications for cognitive processing in the cerebellum.

Read More

journalclub

Motor Cortex Embeds Muscle-like Commands in an Untangled Population Response (2018)

  • Paper Abigail A. Russo, Sean R. Bittner, Sean M. Perkins, Jeffrey S. Seely, Brian M. London, Antonio H. Lara, Andrew Miri, Najja J. Marshall, Adam Kohn, Thomas M. Jessell, Laurence F. Abbott, John P. Cunningham, and Mark M. Churchland. Motor Cortex Embeds Muscle-like Commands in an Untangled Population Response, Neuron. (2018)
  • Abstract

Primate motor cortex projects to spinal interneurons and motoneurons, suggesting that motor cortex activity may be dominated by muscle-like commands. Observations during reaching lend support to this view, but evidence remains ambiguous and much debated. To provide a different perspective, we employed a novel behavioral paradigm that facilitates comparison between time-evolving neural and muscle activity. We found that single motor cortex neurons displayed many muscle-like properties, but the structure of population activity was not muscle-like. Unlike muscle activity, neural activity was structured to avoid ‘‘tangling’’: moments where similar activity patterns led to dissimilar future patterns. Avoidance of tangling was present across tasks and species. Network models revealed a potential reason for this consistent feature: low tangling confers noise robustness. Finally, we were able to predict motor cortex activity from muscle activity by leveraging the hypothesis that muscle-like commands are embedded in additional structure that yields low tangling.
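The tangling measure itself is compact: for each time point, find the worst-case ratio of derivative difference to state difference across the trajectory. The sketch below implements a Russo-et-al.-style Q(t) (the epsilon convention is our guess at a reasonable default) and contrasts a circle, whose similar states have similar derivatives, with a figure eight, whose crossing point makes Q blow up.

```python
import numpy as np

def tangling(X, dt=1.0, eps=None):
    """Q(t) = max_t' ||x'(t)-x'(t')||^2 / (||x(t)-x(t')||^2 + eps), X shape (T, dims)."""
    dX = np.gradient(X, dt, axis=0)
    if eps is None:
        eps = 0.1 * X.var(axis=0).sum()            # small constant, scaled to the data
    d_state = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    d_deriv = ((dX[:, None, :] - dX[None, :, :]) ** 2).sum(-1)
    return (d_deriv / (d_state + eps)).max(axis=1)

t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
circle = np.column_stack([np.cos(t), np.sin(t)])       # untangled trajectory
figure8 = np.column_stack([np.sin(2 * t), np.sin(t)])  # tangled at the crossing
q_circle = tangling(circle, dt=t[1] - t[0])
q_fig8 = tangling(figure8, dt=t[1] - t[0])
```

High tangling means the trajectory passes through nearly identical states headed in different directions, which is exactly what makes a flow field noise-sensitive, hence the paper's argument that motor cortex avoids it.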

Read More

journalclub

Physical mechanism of mind changes and tradeoffs among speed, accuracy, and energy cost in brain decision making: Landscape and flux perspective (2016)

  • Paper Han Yan, Kun Zhang and Jin Wang. Physical mechanism of mind changes and tradeoffs among speed, accuracy, and energy cost in brain decision making: Landscape and flux perspective, Chin. Phys. B. (2016)
  • Abstract

Cognitive behaviors are determined by underlying neural networks. Many brain functions, such as learning and memory, have been successfully described by attractor dynamics. For decision making in the brain, a quantitative description of global attractor landscapes has not yet been completely given. Here, we developed a theoretical framework to quantify the landscape associated with the steady state probability distributions and associated steady state curl flux, measuring the degree of non-equilibrium through the degree of detailed balance breaking for decision making. We quantified the decision-making processes with optimal paths from the undecided attractor states to the decided attractor states, which are identified as basins of attractions, on the landscape. Both landscape and flux determine the kinetic paths and speed. The kinetics and global stability of decision making are explored by quantifying the landscape topography through the barrier heights and the mean first passage time. Our theoretical predictions are in agreement with experimental observations: more errors occur under time pressure. We quantitatively explored two mechanisms of the speed-accuracy tradeoff with speed emphasis and further uncovered the tradeoffs among speed, accuracy, and energy cost. Our results imply that there is an optimal balance among speed, accuracy, and the energy cost in decision making. We uncovered the possible mechanisms of changes of mind and how mind changes improve performance in decision processes. Our landscape approach can help facilitate an understanding of the underlying physical mechanisms of cognitive processes and identify the key factors in the corresponding neural networks.
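The landscape half of the landscape-and-flux picture can be demonstrated in one dimension: simulate noisy dynamics with two "choice" attractors and read off the effective potential as minus the log of the steady-state histogram. This is a cartoon, not the paper's model (and the curl-flux component needs at least two dimensions, so it does not appear here); the double-well potential and noise level are invented.

```python
import numpy as np

rng = np.random.default_rng(6)

# 1-D caricature of decision making: a double-well potential whose two minima
# play the role of the decided attractor states.
def force(x):                       # -dU/dx for U(x) = x^4/4 - x^2/2
    return x - x ** 3

dt, steps, Dnoise = 0.01, 100000, 0.25
noise = rng.normal(size=steps)
x = 0.0
xs = np.empty(steps)
for i in range(steps):
    x += force(x) * dt + np.sqrt(2 * Dnoise * dt) * noise[i]
    xs[i] = x

# The landscape is U_eff = -ln P_ss, estimated from the steady-state histogram.
hist, edges = np.histogram(xs, bins=60, range=(-2, 2), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
U_eff = -np.log(hist + 1e-12)

well_minus = U_eff[np.argmin(np.abs(centers + 1.0))]
barrier = U_eff[np.argmin(np.abs(centers))]
well_plus = U_eff[np.argmin(np.abs(centers - 1.0))]
```

Barrier height and mean first-passage time between the wells are then the quantities the paper uses to formalize the speed-accuracy-energy tradeoff.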

Read More

journalclub

A theory of multineuronal dimensionality, dynamics and measurement (2017)

  • Paper Peiran Gao, Eric Trautmann, Byron M. Yu, Gopal Santhanam, Stephen Ryu, Krishna Shenoy, Surya Ganguli. A theory of multineuronal dimensionality, dynamics and measurement, bioRxiv (2017)

  • Abstract

In many experiments, neuroscientists tightly control behavior, record many trials, and obtain trial-averaged firing rates from hundreds of neurons in circuits containing billions of behaviorally relevant neurons. Dimensionality reduction methods reveal a striking simplicity underlying such multi-neuronal data: they can be reduced to a low-dimensional space, and the resulting neural trajectories in this space yield a remarkably insightful dynamical portrait of circuit computation. This simplicity raises profound and timely conceptual questions. What are its origins and its implications for the complexity of neural dynamics? How would the situation change if we recorded more neurons? When, if at all, can we trust dynamical portraits obtained from measuring an infinitesimal fraction of task relevant neurons? We present a theory that answers these questions, and test it using physiological recordings from reaching monkeys. This theory reveals conceptual insights into how task complexity governs both neural dimensionality and accurate recovery of dynamic portraits, thereby providing quantitative guidelines for future large-scale experimental design.
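A standard way to quantify the "low-dimensional space" the abstract refers to is the participation ratio of the covariance eigenvalues, (Σλ)²/Σλ². The sketch below (our illustrative construction, not the paper's theory) embeds 5-dimensional latent dynamics into 100 "neurons" and recovers an effective dimensionality near 5.

```python
import numpy as np

def participation_ratio(X):
    """Effective dimensionality (sum lambda)^2 / sum lambda^2 of data X, shape (T, N)."""
    lam = np.linalg.eigvalsh(np.cov(X.T))
    return lam.sum() ** 2 / (lam ** 2).sum()

rng = np.random.default_rng(7)
T, N, d = 1000, 100, 5
latent = rng.normal(size=(T, d))                  # d-dimensional task dynamics
X = latent @ rng.normal(size=(d, N))              # embedded in N recorded neurons
X += 0.05 * rng.normal(size=(T, N))               # small private noise
pr = participation_ratio(X)
```

The paper's theory asks the deeper question this number raises: when a measured `pr` is small, is that a property of the circuit or of the task, and how many neurons must be sampled before the dynamical portrait can be trusted?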

Read More

journalclub

A Point Process Framework for Relating Neural Spiking Activity to Spiking History, Neural Ensemble, and Extrinsic Covariate Effects (2005)

  • Paper Wilson Truccolo, Uri T. Eden, Matthew R. Fellows, John P. Donoghue, Emery N. Brown. A Point Process Framework for Relating Neural Spiking Activity to Spiking History, Neural Ensemble, and Extrinsic Covariate Effects, Journal of Neurophysiology, 93(2), 1074-1089 (2005)

  • Abstract

Multiple factors simultaneously affect the spiking activity of individual neurons. Determining the effects and relative importance of these factors is a challenging problem in neurophysiology. We propose a statistical framework based on the point process likelihood function to relate a neuron’s spiking probability to three typical covariates: the neuron’s own spiking history, concurrent ensemble activity, and extrinsic covariates such as stimuli or behavior. The framework uses parametric models of the conditional intensity function to define a neuron’s spiking probability in terms of the covariates. The discrete time likelihood function for point processes is used to carry out model fitting and model analysis. We show that, by modeling the logarithm of the conditional intensity function as a linear combination of functions of the covariates, the discrete time point process likelihood function is readily analyzed in the generalized linear model (GLM) framework. We illustrate our approach for both GLM and non-GLM likelihood functions using simulated data and multivariate single-unit activity data simultaneously recorded from the motor cortex of a monkey performing a visuomotor pursuit-tracking task. The point process framework provides a flexible, computationally efficient approach for maximum likelihood estimation, goodness-of-fit assessment, residual analysis, model selection, and neural decoding. The framework thus allows for the formulation and analysis of point process models of neural spiking activity that readily capture the simultaneous effects of multiple covariates and enables the assessment of their relative importance.
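The log-linear conditional-intensity GLM at the heart of the framework fits in a few lines. The sketch below simulates a neuron whose log intensity depends on a stimulus and its own spike history, then recovers the coefficients by Newton/IRLS on the discrete-time Poisson log-likelihood; the history length, coefficient values, and bin size are toy choices.

```python
import numpy as np

rng = np.random.default_rng(8)
T, H = 5000, 3                                     # time bins, history lags

# Simulate a neuron whose log conditional intensity is linear in a stimulus and
# its own spike history (self-exciting at lag 1, suppressive at lag 2).
stim = rng.normal(size=T)
b_true = np.array([-3.0, 0.8, 1.0, -1.5, 0.2])     # bias, stimulus, lags 1..3
spikes = np.zeros(T)
for t in range(H, T):
    hist = spikes[t - H : t][::-1]                 # lags 1..H
    eta = b_true[0] + b_true[1] * stim[t] + b_true[2:] @ hist
    spikes[t] = float(rng.random() < min(np.exp(eta), 1.0))

# Design matrix [1, stim(t), spikes(t-1), ..., spikes(t-H)]; fit by Newton/IRLS
# on the discrete-time point-process (Poisson) log-likelihood.
X = np.column_stack([np.ones(T - H), stim[H:]]
                    + [spikes[H - k : T - k] for k in range(1, H + 1)])
y = spikes[H:]
b = np.zeros(X.shape[1])
for _ in range(25):
    mu = np.exp(np.clip(X @ b, -30, 30))           # conditional intensity per bin
    grad = X.T @ (y - mu)
    fisher = X.T @ (X * mu[:, None])
    b += np.linalg.solve(fisher + 1e-6 * np.eye(len(b)), grad)
```

Because the log-likelihood is concave in `b`, these Newton iterations converge to the maximum likelihood estimate, which is what makes the GLM formulation so convenient for model comparison and decoding.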

Read More

journalclub

Cannabinoids disrupt memory encoding by functionally isolating hippocampal CA1 from CA3 (2017)

  • Paper Roman A. Sandler, Dustin Fetterhoff, Robert E. Hampson, Sam A. Deadwyler, Vasilis Z. Marmarelis. Cannabinoids disrupt memory encoding by functionally isolating hippocampal CA1 from CA3, PLOS Computational Biology, 13(7), pp. 1-16 (2017)

  • Abstract

Much of the research on cannabinoids (CBs) has focused on their effects at the molecular and synaptic level. However, the effects of CBs on the dynamics of neural circuits remain poorly understood. This study aims to disentangle the effects of CBs on the functional dynamics of the hippocampal Schaffer collateral synapse by using data-driven nonparametric modeling. Multi-unit activity was recorded from rats performing a working memory task in control sessions and under the influence of exogenously administered tetrahydrocannabinol (THC), the primary CB found in marijuana. It was found that THC left firing rate unaltered and only slightly reduced theta oscillations. Multivariate autoregressive models, estimated from spontaneous spiking activity, were then used to describe the dynamical transformation from CA3 to CA1. They revealed that THC served to functionally isolate CA1 from CA3 by reducing feedforward excitation and theta information flow. The functional isolation was compensated by increased feedback excitation within CA1, thus leading to unaltered firing rates. Finally, both of these effects were shown to be correlated with memory impairments in the working memory task. By elucidating the circuit mechanisms of CBs, these results help close the gap in knowledge between the cellular and behavioral effects of CBs.
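The multivariate autoregressive (MVAR) fit behind the CA3-to-CA1 analysis reduces to least squares on lagged regressors. The toy system below has a known feedforward CA3-to-CA1 weight and a CA1 self-feedback weight (all coefficients and the model order are invented), and the fit recovers both — in the paper's terms, "THC" would show up as a drop in the recovered feedforward weight.

```python
import numpy as np

rng = np.random.default_rng(9)
T, p = 4000, 2                                      # samples, model order

# Toy CA3 -> CA1 system: CA1 is driven by CA3 at lag 1 (feedforward) and by
# its own past (feedback).
a11, a21, a22 = 0.6, 0.5, 0.3
ca3 = np.zeros(T); ca1 = np.zeros(T)
for t in range(1, T):
    ca3[t] = a11 * ca3[t - 1] + rng.normal()
    ca1[t] = a21 * ca3[t - 1] + a22 * ca1[t - 1] + rng.normal()

# Fit a bivariate autoregressive model by least squares.
Y = np.column_stack([ca3[p:], ca1[p:]])             # current values, (T-p, 2)
Z = np.column_stack([np.column_stack([ca3[p - k : T - k], ca1[p - k : T - k]])
                     for k in range(1, p + 1)])     # lagged regressors, (T-p, 2p)
A, *_ = np.linalg.lstsq(Z, Y, rcond=None)           # (2p, 2) coefficient matrix

ff_weight = A[0, 1]                                 # estimated CA3 -> CA1 lag-1 weight
fb_weight = A[1, 1]                                 # estimated CA1 -> CA1 lag-1 weight
```

The study's actual models are nonparametric (Volterra-type) rather than linear, but the input-output logic — regress CA1 activity on lagged CA3 and CA1 activity — is the same.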

  • Author Summary

Research into cannabinoids (CBs) over the last several decades has found that they induce a large variety of oftentimes opposing effects on various neuronal receptors and processes. Due to this plethora of effects, disentangling how CBs influence neuronal circuits has proven challenging. This paper contributes to our understanding of the circuit level effects of CBs by using data driven modeling to examine how THC affects the input-output relationship in the Schaffer collateral synapse in the hippocampus. It was found that THC functionally isolated CA1 from CA3 by reducing feedforward excitation and theta information flow while simultaneously increasing feedback excitation within CA1. By elucidating the circuit mechanisms of CBs, these results help close the gap in knowledge between the cellular and behavioral effects of CBs.

Read More

journalclub

Structure in neural population recordings: an expected byproduct of simpler phenomena? (2017)

  • Paper Gamaleldin F. Elsayed, John P. Cunningham. Structure in neural population recordings: an expected byproduct of simpler phenomena?, Nature Neuroscience, 20, 1310–1318 (2017)

  • Abstract

Neuroscientists increasingly analyze the joint activity of multineuron recordings to identify population-level structures believed to be significant and scientifically novel. Claims of significant population structure support hypotheses in many brain areas. However, these claims require first investigating the possibility that the population structure in question is an expected byproduct of simpler features known to exist in data. Classically, this critical examination can be either intuited or addressed with conventional controls. However, these approaches fail when considering population data, raising concerns about the scientific merit of population-level studies. Here we develop a framework to test the novelty of population-level findings against simpler features such as correlations across times, neurons and conditions. We apply this framework to test two recent population findings in prefrontal and motor cortices, providing essential context to those studies. More broadly, the methodologies we introduce provide a general neural population control for many population-level hypotheses.
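The logic of a population control is easy to demonstrate with the classic shuffle it generalizes (the paper's actual contribution is a maximum-entropy tensor control that preserves correlations across times, neurons, and conditions simultaneously — this sketch only preserves single-neuron marginals). All data here are synthetic.

```python
import numpy as np

rng = np.random.default_rng(10)
n_trials, n_neurons = 200, 30

# A "population finding": the top two PCs capture most of the variance. Is that
# novel, or an expected byproduct of simpler structure?
shared = rng.normal(size=(n_trials, 2))
X = shared @ rng.normal(size=(2, n_neurons)) + 0.5 * rng.normal(size=(n_trials, n_neurons))

def top2_var(X):
    lam = np.sort(np.linalg.eigvalsh(np.cov(X.T)))[::-1]
    return lam[:2].sum() / lam.sum()

stat_real = top2_var(X)

# Null: permute each neuron's trials independently, destroying across-neuron
# structure while preserving every single-neuron marginal.
null = []
for _ in range(200):
    Xs = np.column_stack([rng.permutation(X[:, i]) for i in range(n_neurons)])
    null.append(top2_var(Xs))
p_value = float(np.mean(np.array(null) >= stat_real))
```

When the statistic survives the appropriate null, as it does here by construction, the population-level claim carries weight; the paper's point is that for trial-averaged tensors, building the appropriate null is much subtler than this.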

Read More

journalclub

Visualizing Data using t-SNE (2008)

  • Paper Laurens van der Maaten, Geoffrey Hinton. Visualizing Data using t-SNE, Journal of Machine Learning Research, 9(Nov):2579–2605 (2008)

  • Abstract

We present a new technique called “t-SNE” that visualizes high-dimensional data by giving each datapoint a location in a two or three-dimensional map. The technique is a variation of Stochastic Neighbor Embedding (Hinton and Roweis, 2002) that is much easier to optimize, and produces significantly better visualizations by reducing the tendency to crowd points together in the center of the map. t-SNE is better than existing techniques at creating a single map that reveals structure at many different scales. This is particularly important for high-dimensional data that lie on several different, but related, low-dimensional manifolds, such as images of objects from multiple classes seen from multiple viewpoints. For visualizing the structure of very large data sets, we show how t-SNE can use random walks on neighborhood graphs to allow the implicit structure of all of the data to influence the way in which a subset of the data is displayed. We illustrate the performance of t-SNE on a wide variety of data sets and compare it with many other non-parametric visualization techniques, including Sammon mapping, Isomap, and Locally Linear Embedding. The visualizations produced by t-SNE are significantly better than those produced by the other techniques on almost all of the data sets.
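The method's two ingredients — Gaussian affinities in the high-dimensional space, heavy-tailed Student-t affinities in the map, matched by gradient descent on KL(P||Q) — fit in a short sketch. This is a simplification: real t-SNE calibrates a per-point bandwidth to a target perplexity and uses momentum and early exaggeration, where here a single fixed bandwidth and plain gradient steps keep the code short.

```python
import numpy as np

rng = np.random.default_rng(11)

# Two well-separated clusters in 10-D.
X = np.vstack([rng.normal(0, 1, (30, 10)), rng.normal(6, 1, (30, 10))])
n = len(X)

# High-dimensional affinities p_ij (Gaussian kernel, fixed bandwidth).
D2 = ((X[:, None] - X[None]) ** 2).sum(-1)
P = np.exp(-D2 / 8.0)
np.fill_diagonal(P, 0.0)
P /= P.sum(axis=1, keepdims=True)
P = (P + P.T) / (2.0 * n)                          # symmetrized joint probabilities

# Low-dimensional map with Student-t affinities, fit by gradient descent on KL(P||Q).
Y = 0.01 * rng.normal(size=(n, 2))
for _ in range(500):
    d2 = ((Y[:, None] - Y[None]) ** 2).sum(-1)
    inv = 1.0 / (1.0 + d2)                         # Student-t kernel
    np.fill_diagonal(inv, 0.0)
    Q = inv / inv.sum()
    grad = 4.0 * (((P - Q) * inv)[:, :, None] * (Y[:, None] - Y[None])).sum(axis=1)
    Y -= 50.0 * grad

sep = np.linalg.norm(Y[:30].mean(axis=0) - Y[30:].mean(axis=0))
spread = 0.5 * (np.linalg.norm(Y[:30] - Y[:30].mean(axis=0), axis=1).mean()
                + np.linalg.norm(Y[30:] - Y[30:].mean(axis=0), axis=1).mean())
```

The heavy-tailed map kernel is the paper's key trick: it lets moderately distant clusters sit far apart in the map, which is what fixes SNE's crowding problem.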

Read More

journalclub

Fronto-parietal Cortical Circuits Encode Accumulated Evidence with a Diversity of Timescales (2017)

  • Paper Benjamin B. Scott, Christine M. Constantinople, Athena Akrami, Timothy D. Hanks, Carlos D. Brody, and David W. Tank. Fronto-parietal Cortical Circuits Encode Accumulated Evidence with a Diversity of Timescales, Neuron, 95(2), 385-398.e5 (2017)

  • Abstract

Decision-making in dynamic environments often involves accumulation of evidence, in which new information is used to update beliefs and select future actions. Using in vivo cellular resolution imaging in voluntarily head-restrained rats, we examined the responses of neurons in frontal and parietal cortices during a pulse-based accumulation of evidence task. Neurons exhibited activity that predicted the animal's upcoming choice, previous choice, and graded responses that reflected the strength of the accumulated evidence. The pulsatile nature of the stimuli enabled characterization of the responses of neurons to a single quantum (pulse) of evidence. Across the population, individual neurons displayed extensive heterogeneity in the dynamics of responses to pulses. The diversity of responses was sufficiently rich to form a temporal basis for accumulated evidence estimated from a latent variable model. These results suggest that heterogeneous, often transient sensory responses distributed across the fronto-parietal cortex may support working memory on behavioral timescales.

Read More

journalclub

Untangling Brain-Wide Dynamics in Consciousness by Cross-Embedding (2015)

  • Paper Satohiro Tajima, Toru Yanagawa, Naotaka Fujii, Taro Toyoizumi. Untangling Brain-Wide Dynamics in Consciousness by Cross-Embedding, PLOS Computational Biology, 11, 1-28 (2015)

  • Abstract

Advances in recording technologies have enabled the acquisition of neuronal dynamics data at unprecedented scale and resolution, but the increase in data complexity challenges reductionist model-based approaches. Motivated by generic theorems of dynamical systems, we characterize model-free, nonlinear embedding relationships for wide-field electrophysiological data from behaving monkeys. This approach reveals a universality of inter-areal interactions and complexity in conscious brain dynamics, demonstrating its wide application to deciphering complex neuronal systems.

Read More

journalclub

Demixed principal component analysis of neural population data (2016)

  • Paper Kobak, Dmitry and Brendel, Wieland and Constantinidis, Christos and Feierstein, Claudia E and Kepecs, Adam and Mainen, Zachary F and Qi, Xue-Lian and Romo, Ranulfo and Uchida, Naoshige and Machens, Christian K. Demixed principal component analysis of neural population data, eLife, 5, e10989 (2016)

  • Abstract

Neurons in higher cortical areas, such as the prefrontal cortex, are often tuned to a variety of sensory and motor variables, and are therefore said to display mixed selectivity. This complexity of single neuron responses can obscure what information these areas represent and how it is represented. Here we demonstrate the advantages of a new dimensionality reduction technique, demixed principal component analysis (dPCA), that decomposes population activity into a few components. In addition to systematically capturing the majority of the variance of the data, dPCA also exposes the dependence of the neural representation on task parameters such as stimuli, decisions, or rewards. To illustrate our method we reanalyze population data from four datasets comprising different species, different cortical areas and different experimental tasks. In each case, dPCA provides a concise way of visualizing the data that summarizes the task-dependent features of the population response in a single figure.
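dPCA's first step is to marginalize the trial-averaged activity over task parameters. The sketch below is illustrative, not the authors' implementation (full dPCA additionally solves for separate decoder/encoder axes per marginalization); it splits an assumed neurons × stimuli × time array into additive stimulus, time, and interaction parts:

```python
import numpy as np

def marginalize(X):
    """Additively split trial-averaged rates X (neurons x stimuli x time)
    into parameter-specific parts, as in dPCA's marginalization step."""
    grand = X.mean(axis=(1, 2), keepdims=True)          # overall mean rate
    stimulus = X.mean(axis=2, keepdims=True) - grand    # stimulus-dependent part
    time = X.mean(axis=1, keepdims=True) - grand        # condition-independent dynamics
    interaction = X - grand - stimulus - time           # stimulus-time interaction
    return {"mean": grand, "stimulus": stimulus, "time": time,
            "interaction": interaction}
```

The parts sum back to `X` exactly, so running PCA (or, in full dPCA, a reduced-rank regression) separately on each part yields components whose variance is attributable to a single task parameter.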

Read More

journalclub

The Spatiotemporal Organization of the Striatum Encodes Action Space (2017)

  • Paper Klaus, Andreas and Martins, Gabriela J. and Paixao, Vitor B. and Zhou, Pengcheng and Paninski, Liam and Costa, Rui M. The Spatiotemporal Organization of the Striatum Encodes Action Space, Neuron, 95(5), 1171–1180 (2017)

  • Abstract

Activity in striatal direct- and indirect-pathway spiny projection neurons (SPNs) is critical for proper movement. However, little is known about the spatiotemporal organization of this activity. We investigated the spatiotemporal organization of SPN ensemble activity in mice during self-paced, natural movements using microendoscopic imaging. Activity in both pathways showed predominantly local but also some long-range correlations. Using a novel approach to cluster and quantify behaviors based on continuous accelerometer and video data, we found that SPN ensembles active during specific actions were spatially closer and more correlated overall. Furthermore, similarity between different actions corresponded to the similarity between SPN ensemble patterns, irrespective of movement speed. Consistently, the accuracy of decoding behavior from SPN ensemble patterns was directly related to the dissimilarity between behavioral clusters. These results identify a predominantly local, but not spatially compact, organization of direct- and indirect-pathway SPN activity that maps action space independently of movement speed.

Read More

journalclub

From Whole-Brain Data to Functional Circuit Models, The Zebrafish Optomotor Response (2016)

  • Paper Naumann, Eva A. and Fitzgerald, James E. and Dunn, Timothy W. and Rihel, Jason and Sompolinsky, Haim and Engert, Florian. From Whole-Brain Data to Functional Circuit Models, The Zebrafish Optomotor Response, Cell, 167(4), 947–960.e20 (2016)

  • Abstract

Detailed descriptions of brain-scale sensorimotor circuits underlying vertebrate behavior remain elusive. Recent advances in zebrafish neuroscience offer new opportunities to dissect such circuits via whole-brain imaging, behavioral analysis, functional perturbations, and network modeling. Here, we harness these tools to generate a brain-scale circuit model of the optomotor response, an orienting behavior evoked by visual motion. We show that such motion is processed by diverse neural response types distributed across multiple brain regions. To transform sensory input into action, these regions sequentially integrate eye- and direction-specific sensory streams, refine representations via interhemispheric inhibition, and demix locomotor instructions to independently drive turning and forward swimming. While experiments revealed many neural response types throughout the brain, modeling identified the dimensions of functional connectivity most critical for the behavior. We thus reveal how distributed neurons collaborate to generate behavior and illustrate a paradigm for distilling functional circuit models from whole-brain data.

Read More

journalclub

A spiral attractor network drives locomotion in Aplysia (2017)

  • Paper Angela M Bruno, W N Frost, M D Humphries. A spiral attractor network drives locomotion in Aplysia, bioRxiv (2017)

  • Abstract

The neural control of motor behaviour arises from the joint activity of large neuron populations. Unknown is what underlying dynamical system generates this joint activity. Here we show that the network-wide activity driving locomotion of the sea-slug Aplysia is a low-dimensional spiral attractor. We imaged large populations at single-spike resolution from the Aplysia’s pedal ganglion during fictive locomotion. Evoking locomotion rapidly moved the population activity from irregular spontaneous activity into a low-dimensional, periodic, decaying, orbit. This orbit was a true attractor: the activity returned to the same orbit after transient perturbation; and repeatedly evoking locomotion caused the activity to converge on the same low-dimensional orbit. To show this attractor was the locomotion program, we accurately decoded simultaneous recordings of neck motorneuron activity directly from the low-dimensional orbit. Our results provide direct evidence that neural circuits are periodic attractors. Consequently, they support the long-held hypothesis that population activity is an emergent property of a simpler underlying system.

Read More

journalclub

Spatiotemporal Dynamics and Reliable Computations in Recurrent Spiking Neural Networks (2017)

  • Paper Ryan Pyle and Robert Rosenbaum, Spatiotemporal Dynamics and Reliable Computations in Recurrent Spiking Neural Networks, Physical Review Letters, 118, 018103 (2017)

  • Abstract

Randomly connected networks of excitatory and inhibitory spiking neurons provide a parsimonious model of neural variability, but are notoriously unreliable for performing computations. We show that this difficulty is overcome by incorporating the well-documented dependence of connection probability on distance. Spatially extended spiking networks exhibit symmetry-breaking bifurcations and generate spatiotemporal patterns that can be trained to perform dynamical computations under a reservoir computing framework.

Read More

journalclub

Neural Quadratic Discriminant Analysis, Nonlinear Decoding with V1-Like Computation (2016)

  • Paper M. Pagan and E. P. Simoncelli and N. C. Rust, Neural Quadratic Discriminant Analysis, Nonlinear Decoding with V1-Like Computation, Neural Computation, 28, 2291-2319 (2016)

  • Abstract

Linear-nonlinear (LN) models and their extensions have proven successful in describing transformations from stimuli to spiking responses of neurons in early stages of sensory hierarchies. Neural responses at later stages are highly nonlinear and have generally been better characterized in terms of their decoding performance on prespecified tasks. Here we develop a biologically plausible decoding model for classification tasks, which we refer to as neural quadratic discriminant analysis (nQDA). Specifically, we reformulate an optimal quadratic classifier as an LN-LN computation, analogous to “subunit” encoding models that have been used to describe responses in retina and primary visual cortex. We propose a physiological mechanism by which the parameters of the nQDA classifier could be optimized, using a supervised variant of a Hebbian learning rule. As an example of its applicability, we show that nQDA provides a better account than many comparable alternatives for the transformation between neural representations in two high-level brain areas recorded as monkeys performed a visual delayed-match-to-sample task.

Read More

journalclub

Overcoming catastrophic forgetting in neural networks (2017)

  • Paper Kirkpatrick, J., Pascanu, R., Rabinowitz, N., Veness, J., Desjardins, G., Rusu, A. A., Milan, K., Quan, J., Ramalho, T., Grabska-Barwinska, A., Hassabis, D., Clopath, C., Kumaran, D., and Hadsell, R. (2017). Overcoming catastrophic forgetting in neural networks. Proceedings of the National Academy of Sciences (PNAS), pages 201611835+

  • Significance

Deep neural networks are currently the most successful machine-learning technique for solving a variety of tasks, including language translation, image classification, and image generation. One weakness of such models is that, unlike humans, they are unable to learn multiple tasks sequentially. In this work we propose a practical solution to train such models sequentially by protecting the weights important for previous tasks. This approach, inspired by synaptic consolidation in neuroscience, enables state-of-the-art results on multiple reinforcement learning problems experienced sequentially.

  • Abstract

The ability to learn tasks in a sequential fashion is crucial to the development of artificial intelligence. Until now neural networks have not been capable of this and it has been widely thought that catastrophic forgetting is an inevitable feature of connectionist models. We show that it is possible to overcome this limitation and train networks that can maintain expertise on tasks that they have not experienced for a long time. Our approach remembers old tasks by selectively slowing down learning on the weights important for those tasks. We demonstrate our approach is scalable and effective by solving a set of classification tasks based on a hand-written digit dataset and by learning several Atari 2600 games sequentially.
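The mechanism behind this, elastic weight consolidation, adds a quadratic penalty that anchors weights in proportion to their (diagonal) Fisher information on the old task. The sketch below is a toy illustration under assumptions, not the paper's implementation: the Fisher diagonal is taken as given, and the new task's loss is a made-up quadratic that pulls every weight toward 1:

```python
import numpy as np

def ewc_penalty(theta, theta_star, fisher, lam):
    # Quadratic penalty anchoring weights that the diagonal Fisher
    # information marks as important for the previously learned task.
    return 0.5 * lam * np.sum(fisher * (theta - theta_star) ** 2)

def train_task_b(theta_star_a, fisher, lam, lr=0.1, steps=200):
    # Toy task B whose loss 0.5*(theta - 1)^2 pulls every weight toward 1;
    # the EWC term resists movement along directions important for task A.
    theta = theta_star_a.copy()
    for _ in range(steps):
        grad_b = theta - 1.0
        grad_ewc = lam * fisher * (theta - theta_star_a)
        theta = theta - lr * (grad_b + grad_ewc)
    return theta
```

With a large Fisher entry the corresponding weight barely moves (it settles at 1/(1 + λF) in this toy), while a weight with zero Fisher information is free to learn the new task, which is exactly the selective slowing the abstract describes.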

Read More

news

news

journalclub

Hippocampal Place Cells Couple to Three Different Gamma Oscillations during Place Field Traversal (2016)

  • Paper Balint Lasztoczi and Thomas Klausberger, Hippocampal Place Cells Couple to Three Different Gamma Oscillations during Place Field Traversal, Neuron, Vol 91, Issue 1, p34-40 (2016)

  • Summary

Three distinct gamma oscillations, generated in different CA1 layers, occur at different phases of concurrent theta oscillation. In parallel, firing of place cells displays phase advancement over successive cycles of theta oscillations while an animal passes through the place field. Is the theta-phase-precessing output of place cells shaped by distinct gamma oscillations along different theta phases during place field traversal? We simultaneously recorded firing of place cells and three layer-specific gamma oscillations using current-source-density analysis of multi-site field potential measurements in mice. We show that spike timing of place cells can tune to all three gamma oscillations, but phase coupling to the mid-frequency gamma oscillation conveyed from the entorhinal cortex was restricted to leaving a place field. A subset of place cells coupled to two different gamma oscillations even during single-place field traversals. Thus, an individual CA1 place cell can combine and relay information from multiple gamma networks while the animal crosses the place field.

Read More

journalclub

Cracking the Neural Code for Sensory Perception by Combining Statistics, Intervention, and Behavior (2017)

  • Paper Stefano Panzeri, Christopher D Harvey, Eugenio Piasini, Peter E. Latham, Tommaso Fellin, Cracking the Neural Code for Sensory Perception by Combining Statistics, Intervention, and Behavior, Neuron, 93, p491-507 (2017)

  • Abstract

The two basic processes underlying perceptual decisions—how neural responses encode stimuli, and how they inform behavioral choices—have mainly been studied separately. Thus, although many spatiotemporal features of neural population activity, or “neural codes,” have been shown to carry sensory information, it is often unknown whether the brain uses these features for perception. To address this issue, we propose a new framework centered on redefining the neural code as the neural features that carry sensory information used by the animal to drive appropriate behavior; that is, the features that have an intersection between sensory and choice information. We show how this framework leads to a new statistical analysis of neural activity recorded during behavior that can identify such neural codes, and we discuss how to combine intersection-based analysis of neural recordings with intervention on neural activity to determine definitively whether specific neural activity features are involved in a task.

Read More

journalclub

Could a neuroscientist understand a microprocessor? (2017)

There is a popular belief in neuroscience that we are primarily data limited, and that producing large, multimodal, and complex datasets will, with the help of advanced data analysis algorithms, lead to fundamental insights into the way the brain processes information. These datasets do not yet exist, and if they did we would have no way of evaluating whether or not the algorithmically-generated insights were sufficient or even correct. To address this, here we take a classical microprocessor as a model organism, and use our ability to perform arbitrary experiments on it to see if popular data analysis methods from neuroscience can elucidate the way it processes information. Microprocessors are among those artificial information processing systems that are both complex and understood at all levels, from the overall logical flow, via logical gates, to the dynamics of transistors. We show that the approaches reveal interesting structure in the data but do not meaningfully describe the hierarchy of information processing in the microprocessor. This suggests current analytic approaches in neuroscience may fall short of producing meaningful understanding of neural systems, regardless of the amount of data. Additionally, we argue for scientists using complex non-linear dynamical systems with known ground truth, such as the microprocessor, as a validation platform for time-series and structure discovery methods.

  • Author Summary

Neuroscience is held back by the fact that it is hard to evaluate if a conclusion is correct; the complexity of the systems under study and their experimental inaccessibility make the assessment of algorithmic and data analytic techniques challenging at best. We thus argue for testing approaches using known artifacts, where the correct interpretation is known. Here we present a microprocessor platform as one such test case. We find that many approaches in neuroscience, when used naïvely, fall short of producing a meaningful understanding.

Read More

journalclub

Imprinting and recalling cortical ensembles (2016)

  • Paper Luis Carrillo-Reid, Weijian Yang, Yuki Bando, Darcy S. Peterka, Rafael Yuste, Imprinting and recalling cortical ensembles, Science, 353, 691-694 (2016)

  • Supplementary Material

  • Abstract

Neuronal ensembles are coactive groups of neurons that may represent building blocks of cortical circuits. These ensembles could be formed by Hebbian plasticity, whereby synapses between coactive neurons are strengthened. Here we report that repetitive activation with two-photon optogenetics of neuronal populations from ensembles in the visual cortex of awake mice builds neuronal ensembles that recur spontaneously after being imprinted and do not disrupt preexisting ones. Moreover, imprinted ensembles can be recalled by single-cell stimulation and remain coactive on consecutive days. Our results demonstrate the persistent reconfiguration of cortical circuits by two-photon optogenetics into neuronal ensembles that can perform pattern completion.

Read More

journalclub

Is cortical connectivity optimized for storing information? (2016)

  • Paper Nicolas Brunel, Is cortical connectivity optimized for storing information? Nat. Neuroscience 19, 749–755 (2016)

  • Abstract

Cortical networks are thought to be shaped by experience-dependent synaptic plasticity. Theoretical studies have shown that synaptic plasticity allows a network to store a memory of patterns of activity such that they become attractors of the dynamics of the network. Here we study the properties of the excitatory synaptic connectivity in a network that maximizes the number of stored patterns of activity in a robust fashion. We show that the resulting synaptic connectivity matrix has the following properties: it is sparse, with a large fraction of zero synaptic weights (‘potential’ synapses); bidirectionally coupled pairs of neurons are over-represented in comparison to a random network; and bidirectionally connected pairs have stronger synapses on average than unidirectionally connected pairs. All these features reproduce quantitatively available data on connectivity in cortex. This suggests synaptic connectivity in cortex is optimized to store a large number of attractor states in a robust fashion.

Read More

journalclub

History-dependent variability in population dynamics during evidence accumulation in cortex (2016)

  • Paper Ari S Morcos, Christopher D Harvey, History-dependent variability in population dynamics during evidence accumulation in cortex, Nat. Neuroscience 19, 1672-1681 (2016)

  • Abstract

We studied how the posterior parietal cortex combines new information with ongoing activity dynamics as mice accumulate evidence during a virtual navigation task. Using new methods to analyze population activity on single trials, we found that activity transitioned rapidly between different sets of active neurons. Each event in a trial, whether an evidence cue or a behavioral choice, caused seconds-long modifications to the probabilities that govern how one activity pattern transitions to the next, forming a short-term memory. A sequence of evidence cues triggered a chain of these modifications resulting in a signal for accumulated evidence. Multiple distinguishable activity patterns were possible for the same accumulated evidence because representations of ongoing events were influenced by previous within- and across-trial events. Therefore, evidence accumulation need not require the explicit competition between groups of neurons, as in winner-take-all models, but could instead emerge implicitly from general dynamical properties that instantiate short-term memory.

Read More

news

Interpretable Nonlinear Neural Dynamics Model (NIPS 2016)

Neurons are the fundamental units of computation in the brain, but they do not work alone when we perceive a tiger and decide to run away. A fundamental question in systems neuroscience is how neurons interact with each other to generate the large-scale dynamics that implement cognitive behavior, and how those same dynamics go awry in neurological diseases such as Parkinson's disease.

Read More

journalclub

Nonequilibrium landscape theory of neural networks (2013)

  • Paper H. Yan, L. Zhao, L. Hu, X. Wang, E.K. Wang, J. Wang, Nonequilibrium landscape theory of neural networks, Proc. Natl. Acad. Sci. USA E4185-E4194 (2013).

  • Abstract

The brain map project aims to map out the neuron connections of the human brain. Even with all of the wiring mapped out, a global, physical understanding of the function and behavior remains challenging. Hopfield quantified the learning and memory process of symmetrically connected neural networks globally through equilibrium energy. The energy basins of attraction represent memories, and the memory retrieval dynamics is determined by the energy gradient. However, realistic neural networks are asymmetrically connected, and oscillations cannot emerge from symmetric neural networks. Here, we developed a nonequilibrium landscape–flux theory for realistic asymmetrically connected neural networks. We uncovered the underlying potential landscape and the associated Lyapunov function for quantifying the global stability and function. We found that the dynamics and oscillations in human brains responsible for cognitive processes and physiological rhythm regulations are determined not only by the landscape gradient but also by the flux. We found that the flux is closely related to the degree of asymmetry of the connections in neural networks and is the origin of the neural oscillations. The neural oscillation landscape shows a closed-ring attractor topology. The landscape gradient attracts the network down to the ring. The flux is responsible for coherent oscillations on the ring. We suggest the flux may provide the driving force for associations among memories. We applied our theory to the rapid-eye-movement sleep cycle. We identified the key regulation factors for function through global sensitivity analysis of landscape topography against wirings, which are in good agreement with experiments.

Read More

journalclub

Revealing cell assemblies at multiple levels of granularity (2014)

  • Paper Yazan N. Billeh, Michael T. Schaub, Costas A. Anastassiou, Mauricio Barahona, Christof Koch, Revealing cell assemblies at multiple levels of granularity, Journal of Neuroscience Methods, 236, 92-106, (2014)

  • Background

Current neuronal monitoring techniques, such as calcium imaging and multi-electrode arrays, enable recordings of spiking activity from hundreds of neurons simultaneously. Of primary importance in systems neuroscience is the identification of cell assemblies: groups of neurons that cooperate in some form within the recorded population.

  • New method

We introduce a simple, integrated framework for the detection of cell-assemblies from spiking data without a priori assumptions about the size or number of groups present. We define a biophysically-inspired measure to extract a directed functional connectivity matrix between both excitatory and inhibitory neurons based on their spiking history. The resulting network representation is analyzed using the Markov Stability framework, a graph theoretical method for community detection across scales, to reveal groups of neurons that are significantly related in the recorded time-series at different levels of granularity.

  • Results and comparison with existing methods

Using synthetic spike-trains, including simulated data from leaky-integrate-and-fire networks, our method is able to identify important patterns in the data such as hierarchical structure that are missed by other standard methods. We further apply the method to experimental data from retinal ganglion cells of mouse and salamander, in which we identify cell-groups that correspond to known functional types, and to hippocampal recordings from rats exploring a linear track, where we detect place cells with high fidelity.

  • Conclusions

We present a versatile method to detect neural assemblies in spiking data applicable across a spectrum of relevant scales that contributes to understanding spatio-temporal information gathered from systems neuroscience experiments.

Read More

journalclub

Stochastic Backpropagation and Approximate Inference in Deep Generative Models (2014)

  • Paper Danilo Jimenez Rezende, Shakir Mohamed, Daan Wierstra, Stochastic Backpropagation and Approximate Inference in Deep Generative Models, arXiv:1401.4082 stat.ML

  • Abstract

We marry ideas from deep neural networks and approximate Bayesian inference to derive a generalised class of deep, directed generative models, endowed with a new algorithm for scalable inference and learning. Our algorithm introduces a recognition model to represent approximate posterior distributions, and that acts as a stochastic encoder of the data. We develop stochastic back-propagation – rules for back-propagation through stochastic variables – and use this to develop an algorithm that allows for joint optimisation of the parameters of both the generative and recognition model. We demonstrate on several real-world data sets that the model generates realistic samples, provides accurate imputations of missing data and is a useful tool for high-dimensional data visualisation.
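For Gaussian latent variables, the stochastic back-propagation rules the abstract describes amount to the reparameterization z = μ + σ ⊙ ε with ε ~ N(0, I). A minimal NumPy sketch (the function names here are illustrative, not from the paper):

```python
import numpy as np

def sample_latent(mu, log_var, rng):
    # Reparameterize z ~ N(mu, sigma^2) as z = mu + sigma * eps with
    # eps ~ N(0, I): the randomness is isolated in eps, so derivatives
    # of z with respect to mu and log_var are well defined.
    eps = rng.standard_normal(np.shape(mu))
    return mu + np.exp(0.5 * log_var) * eps, eps

def grad_through_sample(dL_dz, log_var, eps):
    # Back-propagate a downstream gradient dL/dz through the sample,
    # into the recognition model's outputs mu and log_var.
    dL_dmu = dL_dz
    dL_dlogvar = dL_dz * 0.5 * np.exp(0.5 * log_var) * eps
    return dL_dmu, dL_dlogvar
```

Because `eps` is held fixed during back-propagation, the gradient of any reconstruction loss flows deterministically into `mu` and `log_var`, which is what allows joint optimisation of the generative and recognition models.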

Read More

journalclub

Spiking neurons can discover predictive features by aggregate-label learning (2016)

  • Paper R. Gütig, Spiking neurons can discover predictive features by aggregate-label learning, Science, 351, aab4113 (2016). DOI: 10.1126/science.aab4113.

  • Supplementary Material

  • Abstract

The brain routinely discovers sensory clues that predict opportunities or dangers. However, it is unclear how neural learning processes can bridge the typically long delays between sensory clues and behavioral outcomes. Here, I introduce a learning concept, aggregate-label learning, that enables biologically plausible model neurons to solve this temporal credit assignment problem. Aggregate-label learning matches a neuron’s number of output spikes to a feedback signal that is proportional to the number of clues but carries no information about their timing. Aggregate-label learning outperforms stochastic reinforcement learning at identifying predictive clues and is able to solve unsegmented speech-recognition tasks. Furthermore, it allows unsupervised neural networks to discover reoccurring constellations of sensory features even when they are widely dispersed across space and time.

Read More

journalclub

Reading Out Olfactory Receptors: Feedforward Circuits Detect Odors in Mixtures without Demixing (2016)

  • Paper Alexander Mathis, Dan Rokni, Vikrant Kapoor, Matthias Bethge, Venkatesh N. Murthy, Reading Out Olfactory Receptors: Feedforward Circuits Detect Odors in Mixtures without Demixing, Neuron, 91(5), p1110-1123, 2016

  • Abstract

The olfactory system, like other sensory systems, can detect specific stimuli of interest amidst complex, varying backgrounds. To gain insight into the neural mechanisms underlying this ability, we imaged responses of mouse olfactory bulb glomeruli to mixtures. We used this data to build a model of mixture responses that incorporated nonlinear interactions and trial-to-trial variability and explored potential decoding mechanisms that can mimic mouse performance when given glomerular responses as input. We find that a linear decoder with sparse weights could match mouse performance using just a small subset of the glomeruli (~15). However, when such a decoder is trained only with single odors, it generalizes poorly to mixture stimuli due to nonlinear mixture responses. We show that mice similarly fail to generalize, suggesting that they learn this segregation task discriminatively by adjusting task-specific decision boundaries without taking advantage of a demixed representation of odors.

Read More

journalclub

A Tractable Method for Describing Complex Couplings Between Neurons and Population Rate (2016)

  • Paper Christophe Gardella, Olivier Marre, and Thierry Mora, A tractable method for describing complex couplings between neurons and population rate, eNeuro, DOI: 10.1523/ENEURO.0160-15.2016

  • Abstract

Neurons within a population are strongly correlated, but how to simply capture these correlations is still a matter of debate. Recent studies have shown that the activity of each cell is influenced by the population rate, defined as the summed activity of all neurons in the population. However, an explicit, tractable model for these interactions is still lacking. Here we build a probabilistic model of population activity that reproduces the firing rate of each cell, the distribution of the population rate, and the linear coupling between them. This model is tractable, meaning that its parameters can be learned in a few seconds on a standard computer even for large population recordings. We inferred our model for a population of 160 neurons in the salamander retina. In this population, single-cell firing rates depended in unexpected ways on the population rate. In particular, some cells had a preferred population rate at which they were most likely to fire. These complex dependencies could not be explained by a linear coupling between the cell and the population rate. We designed a more general, still tractable model that could fully account for these non-linear dependencies. We thus provide a simple and computationally tractable way to learn models that reproduce the dependence of each neuron on the population rate.
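The quantity the model is fit to reproduce can be estimated directly from data. As a simple illustration (a hypothetical helper, not the authors' model), the sketch below computes each cell's empirical spiking probability conditioned on the population rate from a binary spike raster:

```python
import numpy as np

def population_rate_tuning(spikes):
    """Empirical P(cell i spikes | population rate k) from a binary
    raster `spikes` of shape (n_cells, n_time_bins)."""
    pop_rate = spikes.sum(axis=0)                 # summed activity per bin
    rates = np.unique(pop_rate)                   # population rates observed
    tuning = np.array([[spikes[i, pop_rate == k].mean() for k in rates]
                       for i in range(spikes.shape[0])])
    return rates, tuning
```

A cell with a "preferred population rate", as found in the salamander retina data, shows up as a peaked row of `tuning`, which a purely linear cell-to-population coupling cannot produce.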

Read More

journalclub

Training and spontaneous reinforcement of neuronal assemblies by spike timing, Arxiv (2016)

  • Paper Gabriel Koch Ocker, Brent Doiron, Training and spontaneous reinforcement of neuronal assemblies by spike timing, arXiv:1608.00064 (2016), 1-43

  • Related paper: Formation and maintenance of neuronal assemblies through synaptic plasticity, Nat. Comm. 2014

  • Abstract

The synaptic connectivity of cortex is plastic, with experience shaping the ongoing interactions between neurons. Theoretical studies of spike timing–dependent plasticity (STDP) have focused on either just pairs of neurons or large-scale simulations where analytic insight is lacking. A simple account for how fast spike time correlations affect both micro- and macroscopic network structure remains lacking. We develop a low-dimensional mean field theory showing how STDP gives rise to strongly coupled assemblies of neurons with shared stimulus preferences, with the connectivity actively reinforced by spike train correlations during spontaneous dynamics. Furthermore, the stimulus coding by cell assemblies is actively maintained by these internally generated spiking correlations, suggesting a new role for noise correlations in neural coding. Assembly formation has been often associated with firing rate-based plasticity schemes; our theory provides an alternative and complementary framework, where temporal correlations and STDP form and actively maintain learned structure in cortical networks.

Read More

journalclub

Inferring learning rules from distributions of firing rates in cortical neurons, Nature Neuroscience (2015)

  • Paper Lim S., McKee, J. L., Woloszyn, L., Amit, Y., Freedman, D. J., Sheinberg, D. L., and Brunel, N. (2015). Inferring learning rules from distributions of firing rates in cortical neurons. Nat Neurosci http://dx.doi.org/10.1038/nn.4158

  • Abstract

Information about external stimuli is thought to be stored in cortical circuits through experience-dependent modifications of synaptic connectivity. These modifications of network connectivity should lead to changes in neuronal activity as a particular stimulus is repeatedly encountered. Here we ask what plasticity rules are consistent with the differences in the statistics of the visual response to novel and familiar stimuli in inferior temporal cortex, an area underlying visual object recognition. We introduce a method that allows one to infer the dependence of the presumptive learning rule on postsynaptic firing rate, and we show that the inferred learning rule exhibits depression for low postsynaptic rates and potentiation for high rates. The threshold separating depression from potentiation is strongly correlated with both mean and s.d. of the firing rate distribution. Finally, we show that network models implementing a rule extracted from data show stable learning dynamics and lead to sparser representations of stimuli.

Read More

journalclub

Extracting spatial-temporal coherent patterns in large-scale neural recordings using dynamic mode decomposition, J Neurosci Methods (2016)

  • Paper Bingni W. Brunton, Lise A. Johnson, Jeffrey G. Ojemann, J. Nathan Kutz, Extracting spatial-temporal coherent patterns in large-scale neural recordings using dynamic mode decomposition, J Neurosci Methods 258 (2016), 1-15

  • Abstract

There is a broad need in neuroscience to understand and visualize large-scale recordings of neural activity, big data acquired by tens or hundreds of electrodes recording dynamic brain activity over minutes to hours. Such datasets are characterized by coherent patterns across both space and time, yet existing computational methods are typically restricted to analysis either in space or in time separately.

  • New method

Here we report the adaptation of dynamic mode decomposition (DMD), an algorithm originally developed for studying fluid physics, to large-scale neural recordings. DMD is a modal decomposition algorithm that describes high-dimensional dynamic data using coupled spatial–temporal modes. The algorithm is robust to variations in noise and subsampling rate; it scales easily to very large numbers of simultaneously acquired measurements.
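As a concrete illustration, here is the standard SVD-based "exact DMD" algorithm applied to a toy two-channel oscillatory recording. The algorithm follows the usual DMD recipe (time-shifted snapshot matrices, reduced propagator, eigendecomposition); the toy data and variable names are ours, not the paper's.

```python
import numpy as np

# Exact-DMD sketch on a toy oscillatory "recording" (channels x time).
rng = np.random.default_rng(0)
t = np.linspace(0, 2 * np.pi, 200)
X = np.vstack([np.sin(5 * t), np.cos(5 * t)]) + 0.01 * rng.standard_normal((2, 200))

X1, X2 = X[:, :-1], X[:, 1:]           # time-shifted snapshot matrices
U, s, Vh = np.linalg.svd(X1, full_matrices=False)
r = 2                                   # truncation rank
U, s, Vh = U[:, :r], s[:r], Vh[:r, :]
A_tilde = U.conj().T @ X2 @ Vh.conj().T @ np.diag(1.0 / s)  # reduced propagator
eigvals, W = np.linalg.eig(A_tilde)
modes = X2 @ Vh.conj().T @ np.diag(1.0 / s) @ W             # DMD modes

# Each eigenvalue encodes one mode's oscillation frequency and growth/decay.
dt = t[1] - t[0]
freqs = np.angle(eigvals) / (2 * np.pi * dt)
print(np.round(np.abs(freqs), 2))       # recovers ~5/(2*pi) Hz for this signal
```

The eigenvalues come in conjugate pairs for real data, and each mode couples a fixed spatial pattern (a column of `modes`) with a single temporal frequency, which is exactly the "coupled spatial-temporal modes" property the abstract describes.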

  • Results

We first validate the DMD approach on sub-dural electrode array recordings from human subjects performing a known motor task. Next, we combine DMD with unsupervised clustering, developing a novel method to extract spindle networks during sleep. We uncovered several distinct sleep spindle networks identifiable by their stereotypical cortical distribution patterns, frequency, and duration.

  • Comparison with existing methods

DMD is closely related to principal components analysis (PCA) and discrete Fourier transform (DFT). We may think of DMD as a rotation of the low-dimensional PCA space such that each basis vector has coherent dynamics.

  • Conclusions

The resulting analysis combines key features of performing PCA in space and power spectral analysis in time, making it particularly suitable for analyzing large-scale neural recordings.

Read More

journalclub

Decoding subjective decisions from orbitofrontal cortex, Nature Neuroscience 19, 973-980 (2016)

  • Paper Erin Rich and Jonathan Wallis, Decoding subjective decisions from orbitofrontal cortex, Nat Neurosci 19, 973-980 (2016)

  • Abstract

When making a subjective choice, the brain must compute a value for each option and compare those values to make a decision. The orbitofrontal cortex (OFC) is critically involved in this process, but the neural mechanisms remain obscure, in part due to limitations in our ability to measure and control the internal deliberations that can alter the dynamics of the decision process. Here we tracked these dynamics by recovering temporally precise neural states from multidimensional data in OFC. During individual choices, OFC alternated between states associated with the value of two available options, with dynamics that predicted whether a subject would decide quickly or vacillate between the two alternatives. Ensembles of value-encoding neurons contributed to these states, with individual neurons shifting activity patterns as the network evaluated each option. Thus, the mechanism of subjective decision-making involves the dynamic activation of OFC states associated with each choice alternative.
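The core decoding idea, assigning each moment of population activity to one of two value states, can be sketched with a toy nearest-centroid decoder. The simulated ensemble and the decoder below are purely illustrative assumptions; the paper uses its own, more sophisticated state-estimation analysis.

```python
import numpy as np

# Toy decoder: which of two "value states" does an ensemble pattern match?
rng = np.random.default_rng(1)
n_neurons = 30
state_A = rng.uniform(0, 10, n_neurons)   # template mean rates, option A state
state_B = rng.uniform(0, 10, n_neurons)   # template mean rates, option B state

def decode(pop_vector):
    """Assign a population rate vector to the nearer template state."""
    dist_A = np.linalg.norm(pop_vector - state_A)
    dist_B = np.linalg.norm(pop_vector - state_B)
    return "A" if dist_A < dist_B else "B"

# A noisy sample drawn around the A template decodes back to state A.
noisy_A = state_A + rng.normal(0, 0.5, n_neurons)
print(decode(noisy_A))
```

Applying such a decoder in sliding windows yields a moment-by-moment state label, which is the kind of readout that lets one see the network alternate between option-specific states within a single choice.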

Read More

news

Theory Matters

  • What is the role of theory in neuroscience?
  • How can theorists and experimentalists synergize better?

Read More

news

We're hiring!

CATNIP Lab is looking for awesome people: postdocs, graduate & undergraduate students interested in computational neuroscience and machine learning.

Read More