
CNS*2018 Workshop

Integrative Theories of Cortical Function

July 18, 2018, 9:00-17:30. 

27th Annual Computational Neuroscience Meeting (CNS*2018)
Seattle, USA.

Venue:

Allen Institute Training Room (Rm 670), Allen Institute, 615 Westlake Ave N, Seattle, WA 98109

Brief Description:

The cerebral cortex is remarkable for the similarity of its structure across mammalian species and across areas within a species. This has led to theories proposing that all parts of the cortex perform a similar set of operations: a dictionary of canonical cortical computations. In recent years, several theories of what these operations are have been proposed, along with multiple models implementing them. This workshop examines the progress made in understanding these local computations, how global cortical function arises from them, what experimental evidence can differentiate between models, and what the general integrative principles are. We plan to foster a dialogue between theoreticians, experimentalists and modelers.

Organizers:

  • Hamish Meffin, National Vision Research Institute, and Department of Optometry & Visual Science, The University of Melbourne, hmeffin@unimelb.edu.au
  • Stefan Mihalas, Allen Institute for Brain Science, USA, stefanm@alleninstitute.org
  • Anthony Burkitt, Department of Biomedical Engineering, The University of Melbourne, aburkitt@unimelb.edu.au

 


Speakers

 

09:00-09:45 Tania Pasternak (U Rochester, USA)

Defining a role for prefrontal cortex in memory-guided sensory comparisons

09:45-10:30 Christof Koch (Allen Institute for Brain Science, USA)

Cortex as the Physical Substrate of Consciousness

10:30-11:00 Coffee break

11:00-11:45 Subutai Ahmad (VP of Research, Numenta, USA)

Locations in the neocortex: A theory of sensorimotor prediction using cortical grid cells

11:45-12:30 Markus Diesmann (Research Centre Jülich, Germany)

Reusable publication of a cortical multi-area model at cellular resolution

12:30-14:00 Lunch break

14:00-14:45 Anitha Pasupathy (U Washington, USA)

Encoding things and stuff: multiplexed form and texture signals in primate V4

14:45-15:30 Hamish Meffin (U Melbourne, Australia)

The structure of non-linear receptive fields in cat primary visual cortex

15:30-16:00 Coffee break

16:00-16:45 Chang Sub Kim (Chonnam National University, Korea)

Computational implementation of the free energy principle in the brain

16:45-17:30 Stefan Mihalas (Allen Institute for Brain Science, USA)

Cortical visual systems perform deep integration of context

 

Abstracts and Biographies

 

09:00-09:45 Tania Pasternak (U Rochester, USA)

Defining a role for prefrontal cortex in memory-guided sensory comparisons

Abstract: To perform the ubiquitous task of comparing sensory stimuli across time and/or space, subjects must identify these stimuli, retain them in memory and retrieve them at the time of comparison. Thus, the neuronal circuitry underlying such tasks must involve cortical regions subserving sensory processing, maintenance, attention and decision-making. In our work, we have been examining the neural substrates of memory-guided comparisons of visual motion, with a focus on two reciprocally interconnected regions, the lateral prefrontal cortex (LPFC) and the motion-processing area MT. We have characterized the activity in both areas during motion comparison tasks, identifying signals in the LPFC likely to represent bottom-up motion information supplied by MT, and signals in area MT likely to represent top-down influences from the LPFC. I will discuss the evidence that the content of task-related activity in MT and LPFC is a product of continuous interactions between neurons in the two areas, during which they process and exchange signals generated during each stage of memory-guided comparisons of visual motion.

Bio: My research program is aimed at examining the cortical circuitry underlying successful execution of memory-guided comparison tasks involving visual motion. We record spiking and LFP activity with single and multi-laminar electrodes from the lateral prefrontal cortex (LPFC), implicated in executive function, sensory working memory and attention, and from area MT, which has a well-established role in the analysis of visual motion. By focusing on a sensory feature with well-understood neural coding, we are able to accurately track sensory representations in both cortical regions during the sensory, maintenance, comparison and decision stages of the task, relate them to perceptual decisions, and examine the role of cognitive top-down signals in memory-guided sensory comparisons. This work revealed the importance of continuous interactions between prefrontal and sensory neurons processing visual motion, interactions that we are currently exploring more directly by shifting from single-cell recordings and cell-by-cell analysis of firing rates to monitoring the activity of simultaneously recorded neurons across the MT-LPFC network on a trial-by-trial basis. This will allow us to reveal the dynamic encoding strategies underlying the neural basis of visual cognition and working memory.

 

09:45-10:30 Christof Koch (Allen Institute for Brain Science, USA)

Cortex as the Physical Substrate of Consciousness

Abstract: Human and non-human animals not only act in the world but are capable of conscious experience. That is, it feels like something to have a brain and be cold, angry or see red. I will discuss the empirical progress that has been achieved over the past several decades in localizing the footprints of consciousness to the posterior part of the cortex, in the back of the brain.
I will introduce Integrated Information Theory (IIT), which explains in a principled manner which physical systems are capable of conscious, subjective experience. The theory accounts for many biological and medical facts about consciousness and has been used to build a "consciousness-meter" that assesses the presence of consciousness in neurological patients by probing the cortex. IIT also predicts that consciousness is much more widespread in biology than conventionally assumed, that a silent cortex may give rise to experience, and that digital computers cannot be conscious, even if they were to perfectly simulate a human brain. Consciousness does not arise as a form of computation but as a causal power.

 

11:00-11:45 Subutai Ahmad (VP of Research, Numenta, USA)

Locations in the neocortex: A theory of sensorimotor prediction using cortical grid cells

Abstract: The neocortex is capable of modeling complex objects through sensorimotor interaction, but the neural mechanisms are poorly understood. In the entorhinal cortex, grid cells represent the location of an animal in its environment, and this location is updated through movement and path integration. In this talk, we propose that grid-like cells in the neocortex represent the location of sensors on an object. We describe a two-layer model that uses cortical grid cells and path integration to robustly learn and recognize objects through movement. In our model, a layer of grid-like cells provides a location signal such that features can be associated with a specific location in the reference frame of each object. Reciprocal feedback connections to a sensory layer invoke previously learned locations consistent with recent sensory input, and form predictions for future sensory input based on upcoming movements. Simulations show that the model can learn thousands of objects with high noise tolerance. We discuss the relationship to cortical circuitry, and suggest that the reciprocal connections between layers 4 and 6 fit the requirements of the model. We propose that the subgranular layers of cortical columns employ grid-cell-like mechanisms to represent object-specific locations that are updated through movement.
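To make the path-integration mechanism concrete, here is a minimal Python sketch (an illustration under our own assumptions, not Numenta's implementation): a single two-dimensional grid module tracks the sensor's location on an object by path integration, and an associative map from locations to sensed features supports recognition and prediction. The module scale, movements and feature labels are all hypothetical.

# Minimal sketch (not Numenta's code): a 2-D "grid module" tracks a sensor's
# location on an object by path integration, and an associative map pairs
# locations with sensed features so future inputs can be predicted.
import numpy as np

SCALE = 0.4  # metric period of the module (arbitrary units); hypothetical

def path_integrate(phase, movement):
    """Update the module's 2-D phase by a physical movement vector."""
    return (phase + np.asarray(movement) / SCALE) % 1.0

# Learning: touch an object at several points, storing (location -> feature).
object_memory = {}
phase = np.zeros(2)
for movement, feature in [((0.0, 0.0), "edge"), ((0.1, 0.0), "corner"),
                          ((0.0, 0.2), "flat")]:
    phase = path_integrate(phase, movement)
    object_memory[tuple(np.round(phase, 2))] = feature

# Inference: after the same movements, the feature predicted at the current
# location can be compared against the actual sensory input.
phase = path_integrate(np.zeros(2), (0.1, 0.0))
phase = path_integrate(phase, (0.0, 0.2))
print(object_memory.get(tuple(np.round(phase, 2)), "unknown"))  # -> "flat"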

Bio: Subutai Ahmad is the VP of Research at Numenta, with experience in computational neuroscience, deep learning, and real time computer vision. His current research interests are focused on creating detailed theories of cortical columns and layers. His recent work includes a model showing how a layer of pyramidal neurons can learn complex sequences, and a model showing how two cortical layers and multiple cortical columns can cooperate to robustly recognize objects in the context of movement. 

 

11:45-12:30 Markus Diesmann (Research Centre Jülich, Germany)

Reusable publication of a cortical multi-area model at cellular resolution 

Affiliations: (1) Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6), and JARA Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre. (2) Department of Psychiatry, Psychotherapy and Psychosomatics, School of Medicine, RWTH Aachen University. (3) Department of Physics, Faculty 1, RWTH Aachen University.

Abstract: Cortical architecture, i.e. the area-specific cellular and laminar composition of the cortical network, is related to the connectivity between areas, which forms a hierarchical and recurrent network at the brain scale. Building on earlier work on the cortical microcircuit, our recent work [1,2] integrates data on cortical architecture and axonal tracing data into a multi-scale framework describing one hemisphere of macaque vision-related cortex. We represent each area by the network below one square millimeter of cortical surface. Since downscaling inevitably distorts neural network dynamics [3], these circuits are modeled with their natural numbers of neurons and synapses. Simulations confirm a realistic activity regime after the connectivity is adjusted within its margins of error with the help of mean-field theory [4]. At a sufficiently strong coupling between the areas, spike patterns, the distribution of spike rates, and the power spectrum of the activity are compatible with in-vivo resting-state data. Furthermore, the matrix of correlations between the activities of areas is as similar to the experimentally measured functional connectivity of resting-state fMRI as can be expected given inter-individual differences. This correspondence on multiple spatial scales is achieved in a metastable state exhibiting time scales much larger than any time constant of the system. Granger causality analysis at the level of the neural populations reveals that both dense and sparser connections can be dynamically relevant, but that the sparsest connections are not influential. Further, the model provides predictions about the laminar patterns of the structurally and dynamically strongest connections in the feedforward, lateral, and feedback directions.
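For readers unfamiliar with NEST, the following toy fragment sketches the style of such full-density network modeling at miniature scale. It is not the multi-area model, whose actual code is available at [8]; all parameter values below are placeholders, not the published ones.

# Toy NEST fragment (placeholders only; see [8] for the real model code):
# an excitatory and an inhibitory population connected at fixed in-degree.
import nest

nest.ResetKernel()
exc = nest.Create("iaf_psc_alpha", 400)   # excitatory population
inh = nest.Create("iaf_psc_alpha", 100)   # inhibitory population
noise = nest.Create("poisson_generator", params={"rate": 8000.0})

nest.Connect(noise, exc + inh, syn_spec={"weight": 10.0, "delay": 1.5})
nest.Connect(exc, exc + inh,
             conn_spec={"rule": "fixed_indegree", "indegree": 40},
             syn_spec={"weight": 20.0, "delay": 1.5})
nest.Connect(inh, exc + inh,
             conn_spec={"rule": "fixed_indegree", "indegree": 10},
             syn_spec={"weight": -100.0, "delay": 1.5})

rec = nest.Create("spike_recorder")  # named "spike_detector" in NEST 2.x
nest.Connect(exc, rec)
nest.Simulate(200.0)                 # simulate 200 ms
print(rec.get("n_events"))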

While many anatomical and dynamical aspects of the brain are still unknown, such models integrate the available data, constitute testbeds for theories of brain function, and serve as research platforms for iterative improvements and building blocks for further studies. There are, however, challenges to overcome.

Reproducibility and reusability of a brain-scale model at the resolution of neurons and synapses require that researchers have access to efficient simulation code and computational resources. Therefore, progress in the construction of models needs to be accompanied by corresponding progress in simulation technology. There is a technological barrier to the further increase of network size as potentially enabled by the memory and compute power offered by exascale systems. Present simulation code distributes the activity of all neurons to all compute nodes and only subsequently filters out the locally required information. This strategy becomes infeasible in terms of memory for networks larger than 1 billion neurons. The number of incoming connections to a neuron in the cerebral cortex is, however, limited to the order of 10,000; at the brain scale, connectivity is therefore extremely sparse. Recent work takes account of this sparseness and introduces a two-tier connection infrastructure together with directed communication among compute nodes [5]. This novel technology removes the scaling of local memory consumption with network size and disburdens compute nodes from the need to filter the incoming data. Although the new algorithms and data structures address exascale computers, they do not sacrifice performance on small systems and exhibit substantial performance gains already at the petascale. The technology is presently being integrated into the next release of the NEST simulation code. Nevertheless, it remains unclear whether conventional computers will ever be fast enough for studies of plasticity and learning at the brain scale. The idea of neuromorphic computing offers an alternative: the SpiNNaker hardware system has now achieved a breakthrough in simulating a cortical microcircuit model at the full density of synapses [6].
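The difference between the two communication schemes can be stated abstractly. The sketch below is a conceptual illustration only, not NEST's internal data structures: under the old scheme every compute node receives every spike and filters locally, whereas a per-source routing table lets each spike travel only to the nodes that host one of its targets.

# Conceptual sketch of the two spike-exchange schemes (not NEST internals).
from collections import defaultdict

def broadcast(spikes, num_nodes):
    """Old scheme: every node receives every spike and filters locally."""
    return {node: list(spikes) for node in range(num_nodes)}

def directed(spikes, targets_on_node):
    """New scheme: a per-source routing table sends spikes only where needed."""
    inbox = defaultdict(list)
    for src in spikes:
        for node in targets_on_node.get(src, ()):
            inbox[node].append(src)
    return inbox

# Sparse connectivity: neuron 7 has targets on nodes 0 and 2 only.
routing = {7: {0, 2}, 11: {1}}
print(broadcast([7, 11], num_nodes=4))   # every node receives both spikes
print(dict(directed([7, 11], routing)))  # only hosting nodes receive them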

Simulating brain-scale networks at the microscopic level does not necessarily create understanding. Often abstractions in terms of mean-field models are required to expose the essential mechanisms. Conversely, the study on the multi-area model shows that mean-field approximations support the exploration of parameter ranges for microscopic simulations. Ultimately it will be useful to simulate different parts of a network at different levels of description to focus resolution where it is needed. For this reason NEST has been extended by the capability to integrate rate-based models [7].

Models have reached such a complexity that only executable model descriptions enable effective communication between scientists and reproducibility of results. Furthermore, the information required to instantiate a model in the memory of a computer is only one aspect of the modeling process. The experimental data entering the model span multiple scales and come from different sources. Algorithms are required to collate the data and derive the final model parameters. Often, data are only partially available, such that quantitative hypotheses need to be formulated to bridge the gaps. As a consequence, researchers can only add new data to the model or modify assumptions if they have access to the construction process. Therefore, the workflow of data integration also needs to be documented in an executable format. Borrowing techniques from computer science, we demonstrate, using our multi-area model as an example, the development of a publishable, executable workflow of model construction, and discuss the difficulties we encountered in the process. We decided on GitHub as the platform for the review of model code and subsequent dissemination [8]. In this way, workflow and model are open to future enhancements using features like issue tickets and pull requests. Within this platform, Snakemake [9] expresses the entire workflow from the underlying experimental data to the reproduction of the published figures.

The open development of NEST is guided by the NEST Initiative. Partial funding comes from the Human Brain Project through EU grants 604102, 720270, and 785907, and from the German Research Foundation (DFG grant SPP 2041). Use of the JUQUEEN supercomputer in Jülich was made possible by the JARA-HPC Vergabegremium and provided on the JARA-HPC Partition (VSR computation time grant JINB33).

[1] Schmidt M, Bakker R, Hilgetag CC, Diesmann M, van Albada SJ (2018) Brain Struct Funct 223(3):1409-1435

[2] Schmidt M, Bakker R, Shen K, Bezgin G, Hilgetag CC, Diesmann M, van Albada SJ (2016) arXiv:1511.09364

[3] van Albada SJ, Helias M, Diesmann M (2015) PLOS Comput Biol 11(9):e1004490

[4] Schuecker J, Schmidt M, van Albada SJ, Diesmann M, Helias M (2017) PLOS Comput Biol 13:e1005179

[5] Jordan J, Ippen T, Helias M, Kitayama I, Mitsuhisa S, Igarashi J, Diesmann M, Kunkel S (2018) Front Neuroinformatics 12:2

[6] van Albada SJ, Rowley AG, Senk J, Hopkins M, Schmidt M, Stokes AB, Lester DR, Diesmann M, Furber SB (2018) Front Neurosci 12:291

[7] Hahne J, Dahmen D, Schuecker J, Frommer A, Bolten M, Helias M, Diesmann M (2017) Front Neuroinformatics 11:34

[8] https://github.com/INM-6/multi-area-model

[9] Köster J, Rahmann S (2012) Bioinformatics 28:2520-2522

Bio: Prof. Dr. Markus Diesmann is director of the Institute of Neuroscience and Medicine (INM-6, Computational and Systems Neuroscience), director of the Institute for Advanced Simulation (IAS-6, Theoretical Neuroscience) and director of the JARA-Institute Brain Structure-Function Relationships (INM-10) at Jülich Research Centre, Germany. He is also full professor of Computational Neuroscience at the School of Medicine, RWTH Aachen University, Germany, and affiliated with the Department of Physics of the same university. Prof. Diesmann studied physics at Ruhr University Bochum, with a year of Cognitive Science at the University of Sussex, UK. He carried out his PhD studies at the Weizmann Institute of Science, Rehovot, Israel, and at Albert-Ludwigs-University Freiburg. In 2002 he received his PhD degree from the Faculty of Physics, Ruhr University Bochum, Germany. From 1999, he worked as senior staff at the Department of Nonlinear Dynamics, Max Planck Institute for Dynamics and Self-Organization, Göttingen, Germany. In 2003 he became assistant professor of Computational Neurophysics at Albert-Ludwigs-University Freiburg, Germany, before joining the RIKEN Brain Science Institute, Wako City, Japan, in 2006 as a unit leader and later team leader. In 2011 he moved to Jülich. His main scientific interests include the correlation structure of neuronal networks, models of cortical networks, simulation technology and supercomputing. He is one of the original authors of the NEST simulation code and a member of the steering committee of the NEST Initiative.

 

14:00-14:45 Anitha Pasupathy (U Washington, USA)

Encoding things and stuff: multiplexed form and texture signals in primate V4

Abstract: I am interested in understanding how midlevel processing stages of the primate ventral visual pathway encode visual stimuli and how these representations might underlie our ability to segment visual scenes and recognize objects. Our primary focus is area V4. In my talk, I will present results from two recent experiments that demonstrate that many V4 neurons jointly encode both the shape and surface texture of visual stimuli. I will describe our efforts to develop image-computable models to explain how these properties might arise and discuss why this coding strategy may be advantageous for segmentation in natural scenes. 

 

14:45-15:30 Hamish Meffin (U Melbourne, Australia)

The structure of non-linear receptive fields in cat primary visual cortex

Abstract: Information processing in the brain is frequently non-linear. In sensory systems, one approach to understanding such non-linear processing is to present a large ensemble of stimuli, record spiking responses, and use these data to estimate a model of the stimulus-response relationship. Typically this has been done using models consisting of a cascade of linear filters applied to the stimulus, modeling synaptic integration, followed by static non-linearities that model neural spiking (e.g. the linear-non-linear model). The filters are selective for particular features in the stimuli. Many cells use more than one filter to process combinations of features in a non-linear fashion. However, estimating such higher-dimensional non-linearities is challenging due to the potentially large number of parameters required, and has typically been done using only stereotyped non-linearities with few parameters (e.g. quadratics) that do not necessarily accord with biological reality.
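As a point of reference for the discussion below, here is a minimal simulation of a one-filter linear-non-linear cascade (our illustration; the NIM used in the talk generalizes this to several filters, each with its own learned upstream non-linearity; the filter shape and threshold are arbitrary choices).

# Minimal one-filter linear-non-linear (LN) cascade. All parameters are
# illustrative; real receptive fields are estimated, not assumed.
import numpy as np

rng = np.random.default_rng(0)
T, D = 10_000, 16
stimulus = rng.normal(size=(T, D))          # white Gaussian noise frames
k = np.sin(np.linspace(0, np.pi, D))        # hypothetical linear filter
k /= np.linalg.norm(k)

drive = stimulus @ k                        # linear stage: synaptic integration
rate = np.maximum(drive - 0.5, 0.0)         # static non-linearity (threshold)
spikes = rng.poisson(rate)                  # Poisson spike generation

# With a single filter, the spike-triggered average recovers k up to scale.
sta = spikes @ stimulus / spikes.sum()
print(np.corrcoef(sta, k)[0, 1])            # close to 1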

A prominent example is complex cells in primary visual cortex, whose responses are selective for the orientation and spatial frequency of a drifting grating but are relatively invariant to its spatial phase. In the classic “energy model” of these cells, a quadrature pair of filters is used, matched in orientation and spatial frequency but shifted in spatial phase relative to each other. The outputs of the filters are combined using a sum of quadratic non-linearities to give spatial phase invariance.
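A small numerical sketch makes the phase-invariance property explicit (one-dimensional Gabor filters are assumed for brevity, the threshold variant anticipates the alternative non-linearity discussed below, and all numbers are illustrative).

# Sketch of the energy model and its phase invariance (1-D Gabors assumed).
import numpy as np

x = np.linspace(-2, 2, 256)
freq = 2.0                                   # cycles per unit, illustrative
env = np.exp(-x**2)                          # Gaussian envelope
f_even = env * np.cos(2 * np.pi * freq * x)  # quadrature pair of filters
f_odd = env * np.sin(2 * np.pi * freq * x)

def energy(stim):
    """Complex-cell response: sum of squared quadrature filter outputs."""
    return (stim @ f_even) ** 2 + (stim @ f_odd) ** 2

def threshold(stim, theta=40.0):
    """Alternative seen in the data: a thresholded sum, not phase invariant."""
    return max(stim @ f_even + stim @ f_odd - theta, 0.0)

for phase in (0.0, np.pi / 2, np.pi):        # drift the grating's spatial phase
    grating = np.cos(2 * np.pi * freq * x + phase)
    print(f"{phase:4.2f}  energy={energy(grating):7.1f}  "
          f"threshold={threshold(grating):6.1f}")
# The energy response stays constant across phase; the thresholded one does not.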

Here we study complex-like cells in the primary visual cortex of the cat and employ the nonlinear input model (NIM) to estimate the non-linear receptive field structures of these neurons in response to white Gaussian noise stimuli. This model has the capacity to fit a very general class of non-linearities for neurons that are sensitive to multiple features, including, but not restricted to, the quadratic non-linearities employed by the energy model.

We found that cells in primary visual cortex combine spatial features in more diverse ways than expected from the energy model. While pairs of filters were often approximately matched in orientation or spatial frequency, many other cells exhibited a large mismatch. Other characteristics of receptive field filters, such as bandwidth of orientation or spatial frequency tuning, showed even greater variability in the degree of mismatch within a cell.  Further, the non-linearities associated with the output of each filter also exhibited a diversity of properties. While some had approximately even symmetry, similar to the quadratic non-linearities of the energy model required for spatial phase invariance, most had other forms. The most frequent was a threshold-type non-linearity below which spiking was minimal, and above which spike rate increased monotonically. This form of non-linearity resulted in responses that departed markedly from phase invariance.

These results emphasise that complex-like cells in cat primary visual cortex combine a diversity of spatial features through a range of non-linear operations. This diversity goes beyond what is expected from the energy model, in which spatial features matched in orientation and spatial frequency, but differing in spatial phase, are combined through quadratic summation.

Bio: Dr. Hamish Meffin is trained in mathematics, physics and neuroscience. For over fifteen years he has worked in theoretical and experimental neuroscience in cross-disciplinary institutions such as the Bionic Ear Institute, Australia; the Bernstein Center for Computational Neuroscience, Germany; and the National Vision Research Institute, University of Melbourne, Australia. His research involves two main themes: 1) combining theoretical and experimental approaches to understand how neural circuits in the brain give rise to visual perception, and 2) the development of a bionic eye to restore vision to people with degenerative diseases of the retina.

 

16:00-16:45 Chang Sub Kim (Chonnam National University, Korea)

Computational implementation of the free energy principle in the brain

Abstract: The free energy principle (FEP) in the neurosciences suggests that all viable organisms perceive and act on the external world by instantiating a probabilistic causal model embodied in their brain, in a manner that ensures their adaptive fitness [1]. The biological mechanism that endows the organism's brain with the ability to implement the FEP is theoretically framed in terms of an information-theoretic measure, the informational free energy (IFE). According to the FEP, a living system attempts to minimize the IFE, a proxy for surprisal, when exposed to environmental perturbations, by calling on active inference. The recognition dynamics (RD) carries out the computation of minimizing the IFE in the brain, emulating generalized Bayesian filtering [2] and akin to predictive coding schemes [3].
In this talk, I will present a technical overview of the FEP, including a simple example of an agent-based model for its application, based on reference [4].
Then, I will describe a reformulation of the RD, which recasts the FEP by proposing the IFE as an informational Lagrangian of the brain. Subsequently, I invoke the principle of least action [5] to construct the Hamiltonian mechanics for the RD [6]. In the conventional formulation, one employs the gradient descent method and executes the minimization of the IFE at each point in time in the state space comprising the generalized coordinates. The generalized coordinates of motion are a non-Newtonian construct of the infinitely recursive time derivatives of the continuous states of an organism's environment and brain. In the proposed scheme, the minimization is instead performed on the time integral of the IFE over an organism's temporal horizon in its natural habitat. Consequently, the notion of generalized motion is eschewed, and the dynamical state of the brain is determined only by the brain variables and their first-order time derivatives, thereby dismissing the arbitrariness and ambiguity of the conventional assumption. Furthermore, the present theory delivers a natural account of the general structure of asymmetric message passing in the brain's hierarchical architecture, namely descending predictions and ascending prediction errors [7]. Finally, I will discuss how the proposed formulation may be implemented in the biophysical brain.
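For orientation, the conventional gradient-descent scheme can be stated in a few lines, in the spirit of the agent-based example in [4] (this sketch shows the pointwise minimization, not the Hamiltonian reformulation of [6]; the quadratic generative mapping and all numbers are illustrative choices).

# Minimal perception-only free energy example in the spirit of [4]: one
# hidden state phi, one observation y, Gaussian densities, and gradient
# descent on the informational free energy F.
import math

g  = lambda phi: phi ** 2            # generative mapping: prediction of y
dg = lambda phi: 2 * phi             # its derivative
phi_prior, sig_p, sig_y = 1.0, 1.0, 1.0
y = 4.2                              # observed sensory datum

def free_energy(phi):
    """IFE up to a constant: precision-weighted squared prediction errors."""
    return (y - g(phi)) ** 2 / (2 * sig_y) + (phi - phi_prior) ** 2 / (2 * sig_p)

phi, kappa = phi_prior, 0.05         # start at the prior; kappa = step size
for _ in range(200):                 # recognition dynamics: dphi/dt = -dF/dphi
    dF = -(y - g(phi)) * dg(phi) / sig_y + (phi - phi_prior) / sig_p
    phi -= kappa * dF

# phi settles between the prior and the data-driven estimate sqrt(y).
print(phi, math.sqrt(y), free_energy(phi))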

[1] Friston, K. (2010). The free-energy principle: a unified brain theory? Nature Reviews. Neuroscience 11, 127-138.

[2] Friston, K. J. (2008). Variational Filtering. NeuroImage 41, 747-766; Friston, K., Stephan, K., Li, B., & Daunizeau, J. (2010). Generalized Filtering. Mathematical Problems in Engineering, 261670.

[3] See, for instance, Spratling, M. (2008). Reconciling predictive coding and biased competition models of cortical function. Frontiers in Computational Neuroscience 2, 1-8.

[4] Buckley, C. L., Kim, C. S., McGregor, S., & Seth, A. K. (2017). The free energy principle for action and perception: A mathematical review. Journal of Mathematical Psychology 81, 55-79. http://dx.doi.org/10.1016/j.jmp.2017.09.004.

[5] Landau, L. D. & Lifshitz, E. M. (1976). Mechanics (3rd ed.). Amsterdam: Elsevier Ltd.

[6] Kim, C. S. (2018). Recognition Dynamics in the Brain under the Free Energy Principle. Neural Computation, to appear. https://arxiv.org/abs/1710.09118.

[7] Markov, N. T. & Kennedy, H. (2013). The importance of being hierarchical. Current Opinion in Neurobiology 23, 187-195.

Bio: Prof. Chang Sub Kim received his undergraduate education in physics from Seoul National University in Korea and earned his Ph.D. in condensed matter theory in 1989 from the University of Florida, Gainesville, USA. After postdoctoral work at the University of British Columbia in Canada, he joined the physics faculty at Chonnam National University in Korea in 1990, where he has been a full professor since 2001. His research interests span a broad spectrum of condensed matter problems, including nonequilibrium statistical mechanics, the quantum many-body theory of solids, and theoretical optics. Lately, he has been working on dissipative quantum dynamics, the long-standing issue of a theoretical formulation of the entropy principle, and the physical theories that undergird the brain's computation.

 

16:45-17:30 Stefan Mihalas (Allen Institute for Brain Science, USA)

Cortical visual systems perform deep integration of context

Abstract: Deep neural networks were inspired by biological networks. Convolutional neural networks, a frequently used form of deep network, have had great success in many real-world applications and have been used to model visual processing in the brain. However, these networks require large amounts of labeled data to train and are quite brittle: for example, small changes in the input image can dramatically change the network's output prediction. In contrast to what is known from biology, these networks rely on feedforward connections, largely ignoring the influence of recurrent connections.
In this study we construct deep neural networks that make use of knowledge of local circuits, and test some predictions of the networks against observed data. For the local circuit, we used a model based on the assumption that the lateral connections of neurons implement optimal integration of context. The optimal computations require more complex neurons, but they can be approximated by a standard artificial neuron. We tested this hypothesis using natural scene statistics and mouse V1 recordings, which allowed us to construct a parameter-free model of lateral connections. The optimal structure matches the observed structure (like-to-like pyramidal connectivity and distance dependence of connections) better than receptive field correlation models.
Subsequently, we integrated these local circuits into traditional convolutional neural networks. Models with optimal lateral connections are more robust to noise and achieve better performance on noisy versions of the MNIST and CIFAR-10 datasets. These models also reproduce salient features of observed neuronal recordings, e.g. positive signal and noise correlations. Our results demonstrate the usefulness of combining knowledge of local circuits with machine learning techniques for real-world vision tasks and for studying cortical computations.
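The flavor of the approach can be suggested with a schematic toy sketch (our own illustration, not the talk's model): lateral weights between feature channels are fixed from their correlations on clean inputs, "like-to-like", and context is then used to fill in a noisy response.

# Schematic sketch only (not the talk's model): like-to-like lateral weights
# derived from clean-input correlations let context restore a noisy channel.
import numpy as np

rng = np.random.default_rng(0)
clean = rng.normal(size=(1000, 8))            # 1000 samples x 8 feature channels
clean[:, 1] = clean[:, 0] + 0.1 * rng.normal(size=1000)  # channels 0, 1 alike

W = np.corrcoef(clean, rowvar=False)          # like-to-like lateral weights
np.fill_diagonal(W, 0.0)

def contextual(response, alpha=0.5):
    """Blend each channel's response with what its neighbors predict."""
    return (1 - alpha) * response + alpha * (W @ response) / np.abs(W).sum(1)

noisy = clean[0] + rng.normal(scale=1.0, size=8)
noisy[0] = 0.0                                # channel 0 silenced by noise
# Context pulls the silenced channel back toward its correlated neighbor.
print(clean[0, 0], contextual(noisy)[0])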

Bio: Stefan Mihalas is an Assistant Investigator at the Allen Institute for Brain Science, aiming to characterize the computational repertoire of cortical microcircuits and how such computations can be put together to describe the function of the cortical visual system in the mouse. He is also working on structure-driven models of cortical systems and the data analysis needed to constrain such models (e.g. constructing a database of single neuron models, mesoscopic connectivity).

 

 
