
Theoretical and Computational Neuroscience Summer School
CNeuro2026 Faculty
Rava Azeredo da Silveira
Institute of Molecular & Clinical Ophthalmology in Basel, Switzerland and University of Zürich, Switzerland

Rava Azeredo da Silveira’s lab focuses on a range of topics in theoretical and computational neuroscience and cognitive science. These topics, however, are tied together through a central question: How does the brain represent and manipulate information?
Research Interests: Computational Neuroscience & Neurotechnology, Deciphering (Patho-)Physiological Mechanisms, Visual System.
Among the more concrete approaches to this question, the lab analyzes and models neural activity in circuits that can be identified, recorded from, and perturbed experimentally. On a more abstract level, the lab investigates the representation of information in populations of neurons, from a statistical and algorithmic — rather than mechanistic — point of view, through theories of coding and data analyses. In the context of cognitive studies, the lab investigates mental processes such as inference, learning, and decision-making, through both theoretical developments and behavioral experiments. A particular focus is the study of neural constraints and limitations and, further, their impact on mental processes.
CNeuro2026 - Lecture Topics:
Basic Lecture: to be announced
Advanced Lecture: to be announced
Nathaniel Daw
Nathaniel Daw is a Professor of Neural Science and Psychology at Princeton University. He received his Ph.D. in computer science from Carnegie Mellon University and the Center for the Neural Basis of Cognition, before conducting postdoctoral research at the Gatsby Computational Neuroscience Unit at UCL. His research concerns computational approaches to reinforcement learning and decision making, particularly the application of computational models in the laboratory to the design of experiments and the analysis of behavioral and neural data. He is the recipient of a McKnight Scholar Award, a NARSAD Young Investigator Award, a Scholar Award in Understanding Human Cognition from the McDonnell Foundation, and the Young Investigator Award from the Society for Neuroeconomics.

Research Interests: Computational Models, Neuroscientific Experiments, Decision-Making Systems, Learning and Neuromodulation.
I study how people and animals learn from trial and error (and from rewards and punishments) to make decisions, combining computational, economic, neural, and behavioral perspectives. I focus on understanding how subjects cope with computationally demanding decision situations, notably choice under uncertainty and in tasks (such as mazes or chess) requiring many decisions to be made sequentially. In engineering, these are the key problems motivating reinforcement learning and Bayesian decision theory. I am particularly interested in using these computational frameworks as a basis for analyzing and understanding biological decision making. Some ongoing projects include:
Computational models in neuroscientific experiments: Computational models (such as reinforcement learning algorithms) are more than cartoons: they can provide detailed trial-by-trial hypotheses about how subjects might approach tasks such as decision making. By fitting such models to behavioral and neural data, and comparing different candidates, we can understand in detail the processes underlying subjects’ choices. I am interested in developing new techniques for such analyses, and applying them in behavioral and functional imaging experiments to study human decision making.
Interactions between multiple decision-making systems: The idea that the brain contains multiple, separate decision systems is as ubiquitous (in psychology, neuroscience, and even behavioral economics) as it is bizarre. For instance, much evidence points to competition between more cognitive and more automatic processes associated with different brain systems. Such competition has often been implicated in self-control issues such as drug addiction. But (as these examples suggest) having multiple solutions to the problem of making decisions actually compounds the decision problem, by requiring the brain to arbitrate between the systems. We are pursuing this arbitration using a combination of computational and experimental methods.
Learning and neuromodulation: Much evidence has amassed for the idea that the neuromodulator dopamine serves as a teaching signal for reinforcement learning. This relatively good characterization can now provide a foothold for extending in a number of exciting new directions. These include computational (e.g., how can this system balance the need to explore unfamiliar options versus exploit old favorites), behavioral (how is dopaminergically mediated learning manifest; how is it deficient in pathologies such as drug addiction or Parkinson’s disease), and neural (what is the contribution of systems that interact with dopamine, such as serotonin and the prefrontal cortex).
CNeuro2026 - Lecture Topics:
Basic Lecture: to be announced
Advanced Lecture: to be announced

Michael Häusser
The University of Hong Kong, China
Professor Häusser has joined The University of Hong Kong as Director of the School of Biomedical Sciences and Interim Director of the School of Biomedical Engineering.
Professor Häusser obtained a BSc in Physiology from the University of British Columbia and a DPhil in Physiological Sciences from the University of Oxford. He did postdoctoral work at the Max Planck Institute for Medical Research in Heidelberg (with Nobel Laureate Bert Sakmann), at Bell Labs in New Jersey, and at the École Normale Supérieure in Paris before starting his own laboratory at University College London. At UCL, he was Professor of Neuroscience and a Wellcome Trust Principal Research Fellow from 2011. In 2017 he helped to found the International Brain Laboratory, a global collaboration of experimental and theoretical laboratories working together to understand the brain-wide mechanisms of decision-making.
Research Interests: Dendrites, Biological Neural Networks and Artificial Neural Networks.
Michael Häusser has made fundamental contributions to our understanding of how the complex dendritic structures of nerve cells contribute to the functional computations that occur in the mammalian brain. He has achieved this by the introduction and exploitation of advanced techniques, coupled with careful quantitative analysis and modelling of the experimental results. His most distinctive contribution has been to illuminate how non-linear mechanisms in neuronal dendrites contribute to the complex behaviour and plasticity of nerve networks in the brain.
CNeuro2026 - Lecture Topics:
Basic Lecture: to be announced
Advanced Lecture: to be announced
Sukbin Lim
Sukbin Lim is an Assistant Professor of Neural Science at NYU Shanghai and a Global Network Assistant Professor at NYU. Prior to joining NYU Shanghai, she was a postdoctoral researcher at University of California, Davis and University of Chicago. She holds a PhD from NYU and a BS from Seoul National University.
Research Interests: Modeling and Analysis of Neuronal Systems, Computational Neuroscience, Learning and Memory, Network Interactions, Dynamical Systems.

Utilizing a broad spectrum of dynamical systems theory, the theory of stochastic processes, and information and control theories, Professor Lim develops and analyzes neural network models and synaptic plasticity rules for learning and memory. This work is accompanied by the analysis of neural data and by collaboration with experimentalists to propose and test biologically plausible models.
CNeuro2026 - Lecture Topics:
Basic Lecture: to be announced
Advanced Lecture: to be announced

Aravinthan Samuel
Harvard University, Massachusetts, United States of America
Aravinthan D. T. Samuel is a professor at Harvard University. He is a triple Harvard alumnus: he earned a BA in Physics, completed a PhD in Biophysics, and later carried out postdoctoral training in neuroscience, all at Harvard.
Samuel has built his career at the intersection of physics and brain science, using quantitative methods to explore how living systems behave and how brains convert information into action. He is widely recognized for research that links biology, computation, and neuroscience, and for supporting efforts that make complex brain research easier to carry out at scale.
Research Interests: Neuroscience and Biophysics.
Freely behaving animals constantly transform sensory inputs into internal representations, memories, and purposeful behavioral outputs. To do this, they use algorithms and circuits: larger animals use brain circuits, while single-celled organisms use biochemical circuits and signal-transduction pathways.
To make progress, we use accessible biophysical models of organism behavior that can be studied from sensory input to motor output. We study bacterial chemotaxis using E. coli. We study navigational behaviors including chemotaxis, thermotaxis, and mating behaviors in the nematode C. elegans. We study thermosensory and olfactory behaviors in the Drosophila larva.
In all of our studies, we apply expertise in optics and light microscopy. We build microscopes that allow us to manipulate and monitor the circuits that underlie behavior in freely-moving organisms. We use advanced high-throughput electron microscopy to map entire brain circuits at synaptic resolution.
CNeuro2026 - Lecture Topics:
Basic Lecture: to be announced
Advanced Lecture: to be announced
Haim Sompolinsky
Haim Sompolinsky earned his PhD in Physics from Bar-Ilan University, Israel. Currently, he holds positions as Professor of Physics and Neuroscience (Emeritus) at Hebrew University, Israel, and as Professor of Molecular and Cellular Biology and of Physics (in Residence) at Harvard University, USA.

Research Interests: Neuroscience, Artificial Intelligence and Physics.
Sompolinsky’s research goal is to uncover the fundamental principles of the organization, the dynamics, and the function of the brain, viewing the brain through multiscale lenses spanning the molecular, the cellular, and the circuit levels. To achieve this goal, Sompolinsky has developed new theoretical approaches to computational neuroscience based on the principles and methods of statistical physics and the physics of dynamical and stochastic systems. This new field, Neurophysics, builds in part on Sompolinsky’s earlier work on critical phenomena, random systems, spin glasses, and chaos. His research areas cover theoretical and computational investigations of cortical dynamics, sensory processing, motor control, neuronal population coding, long- and short-term memory, and neural learning. The highlights of his research include theories and models of local cortical circuits, visual cortex, associative memory, statistical mechanics of learning, chaos and excitation-inhibition balance in neuronal networks, principles of neural population codes, statistical mechanics of compressed sensing and sparse coding in neuronal systems, and the Tempotron model of spike-time-based neural learning. He also studies the neuronal mechanisms of volition and the impact of physics and neuroscience on the foundations of human freedom and agency.
CNeuro2026 - Lecture Topics:
Basic Lecture: to be announced
Advanced Lecture: to be announced
Sen Song

Sen Song is a Principal Investigator in the Department of Biomedical Engineering, School of Medicine, and the McGovern Institute for Brain Research at Tsinghua University.
He received his PhD in computational neuroscience from Brandeis University in 2002 and completed postdoctoral research at Cold Spring Harbor Laboratory and the Massachusetts Institute of Technology. In 2010, he joined the Department of Biomedical Engineering at Tsinghua University. His main research interest is interdisciplinary research in computational neuroscience and artificial intelligence, and he also has considerable experience in research on neural circuits, bioinformatics, and genomics. His representative work includes theoretical studies of spike-timing-dependent plasticity (STDP) published in Nature Neuroscience and Neuron, and a motif analysis of local brain circuits published in PLoS Biology. In recent years, his lab has also conducted a series of studies on the neural circuits underlying emotion and motivation, published in Cell Reports, the Journal of Neuroscience, and elsewhere. In the field of artificial intelligence, he is interested in applying deep learning to the analysis of brain imaging data and to healthcare and education. In the 2017 Kaggle Data Science Bowl, students from his laboratory, in collaboration with the laboratory of Xiaolin Hu in the CS department, stood out from thousands of teams and won first place with a lung cancer prediction algorithm based on CT images.
Research Interests: Brain-Scale Cognitive Models, Modeling and Analysis of Complex Temporal and Spatial Data, and Neural Circuits.
Sen is interested in how neural circuits carry out computations. In collaboration with experts in chip design, he is interested in applying such insights to building systems for neuromorphic computing. He also works on deciphering the neural circuits underlying emotion and motivation using optogenetics in rodent models.
CNeuro2026 - Lecture Topics:
Basic Lecture: to be announced
Advanced Lecture: to be announced
Louis Tao
Peking University, China
Louis was transplanted from Taipei to New York at an early age and had dreams of becoming an astrophysicist. Later on, after two degrees and two postdocs in Physics, he found computational neuroscience to be his true calling.
Research Interests: Mathematical Modeling of Sensory Systems, Neuronal Networks, Neural Systems.

Most recently he has worked on modeling primary visual cortex, theoretical aspects of neuronal population dynamics, information transfer and processing in neural circuits, neuromorphic computations, and live, optical imaging of C. elegans behavior and its underlying neural circuits.
CNeuro2026 - Lecture Topics:
Basic Lecture: to be announced
Advanced Lecture: to be announced
Taro Toyoizumi
RIKEN Institute, Japan

Taro Toyoizumi is a team leader at the RIKEN Center for Brain Science. He received his BSc in physics from the Tokyo Institute of Technology in 2001, followed by his MSc and PhD in computational neuroscience from the University of Tokyo in 2003 and 2006, respectively. After completing his doctoral studies, he joined the Center for Theoretical Neuroscience at Columbia University as a JSPS and Patterson Trust Postdoctoral Fellow. In 2010, Toyoizumi was appointed as a special postdoctoral researcher at the RIKEN Brain Science Institute and was promoted to laboratory head the following year. He took up his current position at RIKEN in 2018 and was also named an adjunct professor at the Graduate School of Information Science and Technology at the University of Tokyo in 2019. He has served as a Co-Editor-in-Chief of Neural Networks since 2022. His research focuses on the computational principles underlying the experience-based organisation of neural circuits.
Research Interests: Theoretical and Computational Neuroscience, Information Theory, Statistical Physics.
Our research is within the field of computational neuroscience. We utilize computer models to explore how information is processed in the brain and how brain circuits adapt to the environment. Drawing on analytical techniques from statistical physics and information theory, we probe the key functional properties of neural circuits and apply these techniques to distill diverse experimental findings into a few core concepts.
We hold a particular interest in activity-dependent forms of plasticity in the brain, which exert significant influences on learning, memory, and development. Leveraging mathematical models, we aim to formulate a theory that bridges cellular-level plasticity rules and computation. The capacity of neurons to represent and retain information is gauged from the structure and behavior of resulting circuits.
CNeuro2026 - Lecture Topics:
Basic Lecture: to be announced
Advanced Lecture: to be announced
Xiaoqin Wang
Tsinghua University, Beijing, China
Xiaoqin’s research is in the areas of auditory neuroscience and neural engineering. His work has focused on the understanding of the structure and functions of the auditory cortex and the neural basis of vocal communication. His laboratory has developed a unique experimental model, a highly vocal New World primate - the common marmoset (Callithrix jacchus). Using this model system, Dr. Wang’s lab has systematically studied neural coding properties of the auditory cortex in awake and behaving conditions. This work has revealed specialized cortical representations of complex sound features such as pitch and harmonicity, and discovered neural mechanisms involved in vocal feedback control and self-monitoring during speaking.

Research Interests: Auditory Neuroscience, Auditory Cortex, Hearing, Vocal Production and Marmoset.
Using newly developed cochlear implant and wireless neural recording techniques in freely roaming marmosets, Dr. Wang’s laboratory is currently studying neural mechanisms underlying cortical processing of vocal communication signals in both normal and hearing-impaired conditions.
CNeuro2026 - Lecture Topics:
Basic Lecture: to be announced
Advanced Lecture: to be announced
Quan Wen
University of Science and Technology of China

Quan’s lab is interested in identifying basic principles for motor control and computational algorithms for sensorimotor transformation. By combining a range of experimental and theoretical approaches, his lab aims to tackle these questions by focusing on the nervous systems of C. elegans and larval zebrafish, which are relatively compact and optically transparent.
Research Interests: Sensorimotor Transformation in Small Animals, Imaging, Computer Vision and Machine Learning.
The compactness and optical transparency of these organisms allow his lab to develop optical and computational tools for imaging and manipulating whole-brain neural activity in freely behaving animals, which may provide deep insights into how collective activity in a neural network gives rise to complex behaviors.
CNeuro2026 - Lecture Topics:
Basic Lecture: to be announced
Advanced Lecture: to be announced
Matthieu Wyart
Swiss Federal Technology Institute of Lausanne (EPFL), Switzerland
and Johns Hopkins University, United States of America
Matthieu Wyart studied physics, mathematics, and economics at the Ecole Polytechnique in Paris, where he obtained his degree in physics with Honors in 2001; the following year, he received the Diploma of Advanced Studies in Theoretical Physics with highest Honors at the Ecole Normale Supérieure, Paris.
In 2006 he obtained a doctoral degree in Theoretical Physics and Finance at SPEC, CEA Saclay, Paris, with a thesis on electronic markets. He then moved to the United States, working at Harvard, Janelia Farm, and Princeton before joining New York University as an Assistant Professor in 2010; he was promoted to Associate Professor in 2014.
In July 2015, he was appointed Associate Professor of Theoretical Physics in the School of Basic Sciences at EPFL. In April 2024, Matthieu Wyart was promoted to Full Professor of Theoretical Physics.

Research Interests: Physics and Deep Learning.
My main current research focus is the theory of deep learning. Deep learning algorithms are responsible for a revolution in AI, yet why they work is not understood, leading to challenges both in improving these methods and in interpreting their results. Specifically, training deep nets corresponds to a descent in a "loss" landscape similar to complex energy landscapes found in physics. Why is the performance of deep learning greatly improved by adding stochasticity to the training procedure? More generally, why can deep learning perform so well on complex tasks with very limited data, and how does it depend on the symmetry and invariance of the task? We study in particular how the hierarchical and combinatorial structure of data, such as images or text, affects their learnability. This research is thus interdisciplinary and connects statistical physics to computer science and linguistics.
CNeuro2026 - Lecture Topics:
Basic Lecture: to be announced
Advanced Lecture: to be announced
Hang Zhang
Peking University, China

Dr. Hang Zhang received her Bachelor’s degree in Engineering Physics from Tsinghua University in 2002 and her PhD in Cognitive Psychology from the Institute of Psychology of the Chinese Academy of Sciences in 2008. She was a postdoctoral fellow in the Department of Psychology at New York University from 2008 to 2014. She joined Peking University in 2014 as a Principal Investigator at the School of Psychological and Cognitive Sciences, PKU-IDG/McGovern Institute for Brain Research, and Peking-Tsinghua Center for Life Sciences.
Research Interests: Theoretical Neuroscience, Neural Networks, Dynamical Systems.
Every life is a series of decisions. Even a bee needs to balance the rewards a flower provides and the uncertainty that it may contain nothing. Following a decision-theoretic approach, Dr. Zhang studies a wide range of problems in perception, action, and cognition. The central question here is: how does the brain represent and compute uncertainty in decision-making?
She uses psychophysics and computational modeling, and will integrate brain-imaging techniques, to seek an understanding from behavior, to computational algorithm, to neural basis.
CNeuro2026 - Lecture Topics:
Basic Lecture: to be announced
Advanced Lecture: to be announced
Each year, the organisers of the CNeuro summer school make every effort to recruit faculty members from a diverse background, including all genders and ethnic groups.