CNeuro2021 Faculty

Vijay Balasubramanian

University of Pennsylvania in Philadelphia, Pennsylvania, USA


Vijay Balasubramanian is the Cathy and Marc Lasry Professor of Physics at the University of Pennsylvania, where he received the Ira H. Abrams Memorial Award for Distinguished Teaching. He was a Penn Fellow in 2012, and during the 2012-2013 academic year he was a Visiting Professor at the École Normale Supérieure (ENS) in Paris, supported by a fellowship from the Fondation Pierre-Gilles de Gennes. He is currently a Visiting Professor at the Vrije Universiteit Brussel (VUB) in Belgium, a Member of the Aspen Center for Physics, and a Visiting Staff Member at the International Centre for Theoretical Physics (ICTP) in Trieste, Italy.

Research Interests: High Energy Physics, String Theory, Biophysics, Neuroscience.

Vijay is interested in how natural systems manipulate and process information, producing new forms of self-organization.

As a theoretical physicist, he pursues questions about the fundamental nature of space and time. He has worked on the apparent loss of quantum information in the presence of black holes and on the origin of entropy and thermodynamics in gravitating systems. He has discussed how the familiar smooth structure of space-time can emerge as a long-distance effective description of more complex underlying physical constructs. He has also shown how some dimensions of space can be regarded as emergent, arising from the quantum entanglement and information structure of an underlying lower-dimensional theory.

As a biophysicist, Vijay pursues these questions primarily in neuroscience. For him, the brain is a statistical computational device and he seeks to uncover the principles that underlie the organization of neural circuits across scales from cells to the whole brain. He has worked on systems in the brain that support many different functions: vision, audition, olfaction, spatial cognition, motor control and decision making. Applying lessons about adaptive molecular sensing from the olfactory system, he has also written about the functional organization of the adaptive immune system in vertebrates and bacteria (CRISPR).
Finally, Vijay Balasubramanian has written on problems in statistical inference and machine learning, in particular on “Occam’s Razor”, i.e., the tradeoff between simplicity and accuracy in quantitative models. He is interested in this question because all scientific theories involve fitting models to data, and there is a fundamental tradeoff between the complexity of models and their ability to generalize correctly to new situations. This tradeoff influences how scientists infer models of the world, how machines learn the structure in data, and how living things, from single cells to entire organisms with brains, adapt to their environment over timescales from milliseconds to evolutionary time.
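
To make the tradeoff concrete, here is a minimal sketch (an illustrative toy, not taken from his work): polynomials of increasing degree fit a small noisy training set ever better, while their error on held-out data eventually worsens.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy Occam's-razor demo: richer models fit the training data better,
    # but past some complexity they generalize worse. All numbers here are
    # illustrative choices.
    f = lambda x: np.sin(2 * x)                      # "true" law behind the data
    x_tr = rng.uniform(-2, 2, 20)
    y_tr = f(x_tr) + 0.3 * rng.normal(size=20)       # small, noisy training set
    x_te = rng.uniform(-2, 2, 500)
    y_te = f(x_te) + 0.3 * rng.normal(size=500)      # held-out data

    for deg in (1, 3, 5, 9):                         # model complexity = polynomial degree
        c = np.polyfit(x_tr, y_tr, deg)
        train = np.mean((np.polyval(c, x_tr) - y_tr) ** 2)
        test = np.mean((np.polyval(c, x_te) - y_te) ** 2)
        # training error falls monotonically; test error typically turns back up
        print(deg, round(train, 3), round(test, 3))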

CNeuro2021 - Lecture Topics:


 
Basic Session: Principles of Distributed Computation in the Brain

Advanced Session: Imagining Space


Cornelia (Cori) Bargmann

The Rockefeller University in New York, USA

Cori Bargmann received her B.S. in biochemistry from the University of Georgia and her Ph.D. from the Massachusetts Institute of Technology. She began her studies of C. elegans during her postdoctoral work with Bob Horvitz, also at MIT. She joined the University of California, San Francisco as an assistant professor in 1991 and moved to the Rockefeller University in 2004. Bargmann’s lab uses a relatively simple organism, the nematode C. elegans, and its extremely sensitive sense of smell to study how genes regulate neuronal development, function, and behavior. Her work has been recognized with numerous awards, including election to the National Academy of Sciences.


Research Interests: Neural Circuits, Genes, and Behavior.

Genes, the environment, and experience interact to shape an animal’s behavior. Caenorhabditis elegans, a roundworm with just 302 neurons, shows considerable sophistication in its behaviors, and its defined neuronal wiring and genetic accessibility make it an ideal subject in which to study these interactions. Using C. elegans as a model, Bargmann’s laboratory characterizes genes and neural pathways that allow the nervous system to generate flexible behaviors.

CNeuro2021 - Lecture Topic:


 
Special Lecture: Organizing Behavior across Timescales

 


Dmitri (Mitya) B. Chklovskii

Flatiron Institute at the Simons Foundation in New York, USA

Dmitri B. Chklovskii received his Ph.D. in theoretical physics from the Massachusetts Institute of Technology, Cambridge. From 1994 to 1997, he was a junior fellow of the Harvard Society of Fellows. He transitioned to neuroscience at the Salk Institute for Biological Studies, San Diego, California. From 1999 to 2007, he was an assistant/associate professor at Cold Spring Harbor Laboratory, New York. Then, as a group leader at Janelia Research Campus, Ashburn, Virginia, he led the team that assembled what was at the time the largest connectome. He is now a group leader for neuroscience at the Flatiron Institute, New York, and a research associate professor at New York University Medical Center. Informed by the function and structure of the brain, his group develops online-learning algorithms for big data.


Research Interests: Computational Biology, Imaging, Systems, Cognitive, & Computational Neuroscience.

The goal of Mitya Chklovskii’s research is to reverse engineer the brain on the algorithmic level. Informed by anatomical and physiological neuroscience data, his group develops algorithms that model brain computation and solve machine learning tasks.
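
As a concrete flavor of this program (and of the Advanced Session below), here is a minimal sketch of the similarity-matching principle that Chklovskii and colleagues have used to derive neural circuits: choose low-dimensional outputs whose pairwise similarities match those of the inputs. The offline solution below uses an eigendecomposition; in the online setting, this objective leads to networks with Hebbian feedforward and anti-Hebbian lateral plasticity. The data and dimensions are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    # Similarity matching: find k-dimensional outputs Y whose Gram matrix
    # Y^T Y best approximates the input Gram matrix X^T X (Frobenius norm).
    n, k, T = 10, 3, 500
    X = rng.normal(size=(n, 3)) @ rng.normal(size=(3, T))  # inputs near a 3-D subspace

    G = X.T @ X
    lam, V = np.linalg.eigh(G)                    # eigenpairs, ascending order
    top = np.argsort(lam)[::-1][:k]               # keep the k largest eigenvalues
    Y = np.sqrt(lam[top])[:, None] * V[:, top].T  # optimal outputs (up to rotation)

    # Residual similarity mismatch is small: outputs preserve input geometry.
    print(np.linalg.norm(G - Y.T @ Y) / np.linalg.norm(G))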

CNeuro2021 - Lecture Topics:


 
Basic Session: Deriving Neural Circuits from First Principles

Advanced Session: Similarity Matching Networks


Julijana Gjorgjieva

Max Planck Institute (MPI) for Brain Research in Frankfurt, Germany

Julijana Gjorgjieva studied mathematics at Harvey Mudd College in California, USA, and became interested in neuroscience at the University of Cambridge during a course in computational neuroscience while doing Part III of the Mathematical Tripos. After obtaining a PhD in Applied Mathematics at the University of Cambridge in 2011 with Stephen Eglen, she spent five years in the USA as a postdoctoral research fellow, at Harvard University with Haim Sompolinsky and Markus Meister and at Brandeis University with Eve Marder, supported by grants from the Swartz Foundation and the Burroughs Wellcome Fund. In 2016, she set up an independent research group at the Max Planck Institute for Brain Research in Frankfurt, Germany, and became a Professor of Computational Neuroscience at the Technical University of Munich, Germany. She received an ERC Starting Grant in 2018 for her research on theoretical models of neural circuit organization and computation during postnatal development, and is further supported by a Human Frontier Science Program grant to study the plasticity and evolution of sensory systems under different environmental constraints. She is also a member of the steering committee of the Bernstein Network for Computational Neuroscience and co-chaired the Bernstein Conference in Computational Neuroscience in 2020 and 2021.


Research Interests: Computation in Neural Circuits.

Using theoretical and computational approaches, my research investigates the emergence of neural circuit organization and the implications of this organization for circuit computations and function. Specifically, we combine two complementary approaches. First, we build bottom-up mechanistic models to study how non-random connectivity and activity emerge at the level of synaptic inputs on dendritic branches, microcircuits, and different brain regions. We apply these concepts to understand neural circuit development at very early ages, right after an animal is born, when patterned spontaneous activity guides cellular and synaptic refinements. We also investigate how neural circuit function is maintained after the onset of sensory experience, especially in the presence of perturbations. Second, we apply top-down normative frameworks to study how computation arises, in the context of evolution, from the goal of a neural system to maximize information transmission about a sensory stimulus subject to relevant biological constraints. One example is the generation of diverse responses in a population of sensory neurons (such as retinal ganglion cells or auditory nerve fibers) as a function of noise and stimulus statistics. Our work is supported by experimental collaborations based on different animal models, from rodent to fruit fly, allowing us to access individual neural circuit components and test modeling predictions.
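
A classic instance of this normative logic (in the spirit of Laughlin's histogram-equalization result, and purely illustrative here) is that a single neuron with a bounded, monotonic response maximizes information by matching its tuning curve to the cumulative distribution of the stimulus, so that all response levels are used equally often:

    import numpy as np

    rng = np.random.default_rng(0)

    # Skewed stimulus distribution (a stand-in for natural light intensities)
    s = rng.lognormal(mean=0.0, sigma=0.5, size=100_000)

    # Infomax tuning curve = stimulus CDF: response level p/100 is reached
    # exactly at the p-th percentile of the stimulus.
    tuning = np.percentile(s, np.linspace(0, 100, 101))

    r = np.searchsorted(tuning, s) / 100.0     # responses in [0, 1]
    counts, _ = np.histogram(r, bins=10)
    print(counts)                              # roughly flat: response entropy is maximal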

CNeuro2021 - Lecture Topics:


 
Basic Session: Plasticity & Homeostasis in Neural Circuits

Advanced Session: Models of Developing Neural Circuits at Different Scales


Christian Machens

Champalimaud Centre for the Unknown in Lisbon, Portugal


Christian Machens studied physics at the University of Tübingen, at SUNY Stony Brook, and at the Humboldt University in Berlin. Fascinated by the possibility of applying concepts and tools from physics to the study of the brain, he did his PhD thesis work in computational neuroscience with Andreas Herz at the Humboldt University in Berlin. In 2002, he moved to Cold Spring Harbor Laboratory, where he worked as a postdoctoral fellow with Tony Zador and Carlos Brody. After a brief stint as a junior research group leader at the Ludwig-Maximilians-University in Munich in 2006/2007, he was appointed Assistant Professor at the École normale supérieure in Paris in 2007, where he joined the Group for Neural Theory. After dividing his time between Paris and Lisbon, Christian Machens joined the Champalimaud program as a full-time faculty member in September 2011.

Research Interests: Theoretical Neuroscience.

How does the brain work? What kinds of computations are carried out by neural systems? In my lab, we try to address these questions by analyzing recordings of neural activity and constructing mathematical models of neural circuits. Our main goal is to link the activity within various brain areas to a computational theory of animal behavior. We are currently developing methods to summarize the activity of neural populations in useful ways and to compare population activity across areas. In turn, we seek to relate the population activity to the behavioral, computational, and mechanistic problems or constraints that organisms face. We work in close collaboration with several experimental labs, both within and outside the CCU.
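
As a toy illustration of the geometric view advertised in the Advanced Session (a sketch in the spirit of spike-coding network models, with all parameters chosen for illustration), each spike below kicks a linear population readout toward a target signal, and a neuron fires only when its spike would reduce the readout error:

    import numpy as np

    rng = np.random.default_rng(1)

    N, dt, lam = 20, 1e-3, 10.0              # neurons, time step (s), readout decay (1/s)
    G = rng.normal(size=(2, N))
    G *= 0.1 / np.linalg.norm(G, axis=0)     # small decoding vectors -> fine resolution

    r = np.zeros(N)                          # filtered spike trains (the readout basis)
    errs = []
    for t in range(2000):
        x = np.array([np.sin(2 * np.pi * t * dt), np.cos(2 * np.pi * t * dt)])
        xhat = G @ r                         # linear population readout
        V = G.T @ (x - xhat)                 # "voltages": error projected on each neuron
        thr = 0.5 * np.sum(G ** 2, axis=0)
        i = np.argmax(V - thr)
        if V[i] > thr[i]:                    # spike only if it shrinks the readout error
            r[i] += 1.0
        r *= 1.0 - lam * dt                  # leaky decay of the readout
        errs.append(np.linalg.norm(x - xhat))

    print(np.mean(errs[500:]))               # small tracking error after a transient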

CNeuro2021 - Lecture Topics:


 
Basic Session: Population Coding & Distributed Representations in the Brain

Advanced Session: A Geometric View on Spiking Networks & Distributed Representations


Kanaka Rajan

Icahn School of Medicine at Mount Sinai in New York, USA

Kanaka Rajan, Ph.D., is a computational neuroscientist and Assistant Professor at the Friedman Brain Institute at the Icahn School of Medicine at Mount Sinai in New York. Before joining the faculty at Mount Sinai, Kanaka completed her postdoctoral work at Princeton University, where she made significant contributions to the modeling of important neural processes, working on feature selectivity with Dr. William Bialek and on biologically inspired neural network models with Dr. David Tank. She received her Ph.D. from Columbia University with Dr. Larry Abbott.

Research Interests: Neurobiology, Biophysics and Artificial Intelligence.

The Rajan Lab brings together the fields of brain research and AI to figure out how the brain works. We use mathematical and computational models based on data collected from neuroscience experiments to design an artificial system that can perform realistic behaviors using only the machinery the biological nervous system has access to (i.e., neurons and synapses operating at a fast timescale). After building these systems, we can then ‘reverse engineer’ them to reveal the operating principles of the real brain.
The resulting integrative theories and models have the potential to transform the way we study the brain, by making specific, quantifiable predictions that lead to new hypotheses about how the brain works. We are currently applying this approach to both healthy brains and to those affected by neuropsychiatric diseases.
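
A minimal sketch of this build-then-reverse-engineer loop (an illustrative toy, not the lab's actual models; the fixed-point analysis follows the general approach popularized by Sussillo & Barak, 2013): simulate a rate network built only from neurons and synapses, run it to a steady state, then linearize around that state to read off its local dynamics.

    import numpy as np

    rng = np.random.default_rng(2)

    # A vanilla continuous-time rate RNN: tau dx/dt = -x + J tanh(x)
    N, tau, dt = 100, 0.02, 1e-3
    J = rng.normal(scale=0.9 / np.sqrt(N), size=(N, N))  # gain < 1: dynamics settle

    x = rng.normal(size=N)
    for _ in range(5000):                    # run the dynamics to a steady state
        x += (dt / tau) * (-x + J @ np.tanh(x))

    # "Reverse engineering": linearize around the fixed point and inspect
    # the Jacobian's eigenvalues to characterize the local dynamics.
    phi_prime = 1.0 - np.tanh(x) ** 2
    Jac = (-np.eye(N) + J * phi_prime[None, :]) / tau
    print(np.linalg.eigvals(Jac).real.max())  # negative: the state is locally stable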

CNeuro2021 - Lecture Topics:


 
Basic Session: Method and Logic in Recurrent Neural Network (RNN) Models

Advanced Session: RNNs for Mechanism Discovery in Neuroscience


Eric Shea-Brown

University of Washington in Seattle, Washington, USA


Eric Shea-Brown studied engineering physics at UC Berkeley and began his research career with a group of wonderful mentors at the Lawrence Livermore National Laboratory, a stroke of good fortune that continued in the years to follow. In 2004, he completed his Ph.D. in Princeton's Program in Applied and Computational Mathematics, where he was advised by Profs. Phil Holmes and Jonathan Cohen at the interface of dynamical systems and neuroscience. His postdoctoral training was with Prof. John Rinzel at NYU’s Courant Institute and Center for Neural Science, working on mathematical models in cognitive neuroscience and the dynamics of neural circuits.

Research Interests: Theoretical Neuroscience, Neural Networks, Dynamical Systems.

There is an avalanche of new data on the brain’s activity, revealing the collective dynamics of vast numbers of neurons. In principle, these collective dynamics can be of almost arbitrarily high dimension, with many independent degrees of freedom, which may reflect powerful capacities for general-purpose computation and information representation. In practice, neural datasets reveal a range of outcomes, including collective dynamics of much lower dimension, which may reflect other desiderata for neural codes. For what networks does each case occur? We will introduce the underlying concepts from scratch and then discuss two recent sets of contributions to the answer. The first are “bottom-up” mechanistic ideas that link tractable statistical properties of network connectivity with the dimension of the activity they produce. The second are “top-down” computational ideas that describe how features of connectivity and dynamics that impact dimension arise as networks learn to perform basic tasks.
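
One standard way to quantify the dimension in question (an illustrative choice of measure, not necessarily the one used in the lectures) is the participation ratio of the eigenvalues of the activity covariance, D = (sum_i λ_i)^2 / (sum_i λ_i^2):

    import numpy as np

    rng = np.random.default_rng(3)

    def participation_ratio(X):
        """Effective dimension of activity X (neurons x time)."""
        lam = np.linalg.eigvalsh(np.cov(X))      # covariance eigenvalues
        return lam.sum() ** 2 / (lam ** 2).sum()

    N, T = 200, 5000
    low_d = rng.normal(size=(N, 5)) @ rng.normal(size=(5, T))  # activity in a 5-D subspace
    high_d = rng.normal(size=(N, T))                           # independent neurons

    print(participation_ratio(low_d))    # close to 5
    print(participation_ratio(high_d))   # close to N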

CNeuro2021 - Lecture Topics:


 
Basic Session: Dimensionality in Neural Networks I

Advanced Session: Dimensionality in Neural Networks II

Each year, the organizers of the CNeuro summer school make every effort to recruit students from diverse backgrounds, including all genders and ethnic groups. Due to the current Covid-19 situation, this course will be held in a virtual format this year.
