If you are interested, please fill in this Google form here.
lecturer: Balázs B Ujfalussy
language: English or Hungarian
prerequisites: Interest in neuroscience and basic programming in Python
location and date: 14 lectures in KOKI lecture room (http://koki.hu/english), every Tuesday 9:00-10:30 from 21 September (Intro session: 10:00-10:30, 14 Sept.)
credit: the course is not accredited at any university, so no credit is given – just come if you are interested
Machine learning and computer science have a lot to learn from the brain when it comes to efficiency, robustness, generalisation and adaptivity, yet the code and the algorithms running on the neural hardware are poorly understood. Using state-of-the-art electrophysiological and optical techniques, we are now able to monitor the activity of large numbers of neurons in behaving animals, providing an unprecedented opportunity to observe how interacting neural populations give rise to computation.
This course builds on the framework that recurrent cortical networks form nonlinear dynamical systems implementing simple computational motifs. However, these low-dimensional computational motifs are embedded in the high dimensional neuronal activity space. Thus, observing hundreds of noisy neurons during behaviour allows us to reconstruct the low-dimensional manifolds relevant for understanding computation: learning, memory, motor control, or decision making.
The aim of the course is to introduce students to recent approaches for analysing and interpreting neuronal population activity data. We will focus on generative models and take a Bayesian perspective: we will learn how to build probabilistic models of the data and how to perform inference and learning with these models. The course is a mixture of lectures, focusing on the theoretical background and discussing neuroscience experiments, and practical sessions, where students will have the opportunity to apply the learned techniques to real neuronal data. Interaction between the students is highly encouraged.
Format: 7 lectures + 7 practical sessions.
Lectures: We will start with a motivating neuroscience problem, often from a recent research paper, then introduce the mathematical basis of the given analysis technique, and finally discuss its scope and limitations.
Tutorials: Students will have access to IPython notebooks in which they can apply the learned techniques to analyse example datasets. The goal of these sessions is to gain hands-on experience in formulating and testing scientific hypotheses using computational models and data analysis. Students will work in their own notebooks.
During the last weeks of the course, students or student groups will carry out independent mini-projects. They will be required to formulate their own research question and apply the learned techniques to answer it. At the end of the course, students will give a short presentation summarising the results of their analysis.
- Neural networks as dynamical systems (feed-forward and recurrent networks, rate vs. spiking nets, linear, nonlinear nets, inhibitory and excitatory neurons, non-normal networks and selective amplification, dynamical systems recap).
- Recording techniques: neural networks in action. Electrophysiology (tetrodes and silicon probes) and optical imaging (Ca2+ and voltage imaging). Challenges and hopes; the need for quantitative/computational models.
- Introduction to the datasets.
- Supervised regression – Generalised Linear Models for explaining neural tuning and predicting spikes. Model comparison, regularisation, maximum likelihood (Stevenson et al., 2012).
- GLM tutorial session – we will explore the performance of generalised linear models in spike prediction on various datasets.
- Latent variable models – introduction from a generative perspective. Inference and decoding methods. Static Bayesian decoding: posterior distribution, cross-validation, bootstrapping (Pfeiffer and Foster, 2013).
- Decoding tutorial session – accurate decoding of position from the activity of hippocampal place cells.
- Introduction to unsupervised learning. Learning the parameters: maximum likelihood, EM algorithm, clustering and hidden Markov models (HMMs, Maboudi et al., 2018).
- HMM tutorial session.
- Static, linear Gaussian models for unsupervised dimensionality reduction: PCA, FA, ICA – their assumptions and their validity in the neural context (Mante, Sussillo et al., 2013).
- Linear Gaussian models tutorial session.
- Relaxing the assumptions of linear Gaussian models: Gaussian Process Factor Analysis, Poisson LDS, demixed PCA and others.
- Variational inference: deep neural networks, variational autoencoders and their application to neuronal data. LFADS (Pandarinath et al., 2017).
- Computation Through Neural Population Dynamics (Vyas et al., 2020)
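To give a flavour of the practical sessions, here is a minimal sketch of a linear recurrent rate network treated as a dynamical system, as in the first lecture topic. All parameter values here are illustrative, not taken from the course materials:

```python
import numpy as np

def simulate_linear_rate_net(W, r0, inp, dt=0.001, tau=0.02, T=0.5):
    """Euler-integrate the rate dynamics tau * dr/dt = -r + W @ r + inp."""
    n_steps = int(T / dt)
    r = r0.copy()
    traj = np.empty((n_steps, r.size))
    for t in range(n_steps):
        r = r + dt / tau * (-r + W @ r + inp)
        traj[t] = r
    return traj

# Two-neuron example: weak symmetric recurrence, constant input
W = np.array([[0.0, 0.4],
              [0.4, 0.0]])
traj = simulate_linear_rate_net(W, r0=np.zeros(2), inp=np.ones(2))

# With all eigenvalues of W below 1, the network settles into the
# stable fixed point r* = (I - W)^{-1} inp
r_star = np.linalg.solve(np.eye(2) - W, np.ones(2))
```

Whether such a network amplifies, integrates or decays its input is read off from the eigenvalues (or, for non-normal W, the transient dynamics) of the connectivity matrix – the theme of the dynamical-systems recap.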
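The GLM session fits Poisson regression models to spike counts by maximum likelihood. A simulated one-covariate sketch (the real sessions use neural data; the gradient-ascent fit below is a bare-bones stand-in for library routines):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: spike counts driven by a 1-D stimulus through an exp link
n, true_w, true_b = 2000, 1.5, -1.0
x = rng.normal(size=n)
y = rng.poisson(np.exp(true_w * x + true_b))

# Maximum-likelihood fit of a Poisson GLM by gradient ascent.
# Mean log-likelihood (up to constants): mean(y * eta - exp(eta)), eta = w*x + b
w, b, lr = 0.0, 0.0, 0.1
for _ in range(500):
    mu = np.exp(w * x + b)       # model's predicted firing rate
    w += lr * np.mean((y - mu) * x)
    b += lr * np.mean(y - mu)
```

The recovered `(w, b)` approach the generating parameters; regularisation and model comparison, covered in the lecture, become essential once the covariates are many and correlated.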
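The static Bayesian decoding session can be sketched as follows: given Gaussian tuning curves for a population of place cells (hypothetical parameters below), the posterior over position follows from an independent-Poisson likelihood and a flat prior:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical tuning: 20 place cells with Gaussian place fields on a 1-D track
pos_bins = np.linspace(0, 1, 100)
centers = np.linspace(0, 1, 20)
tuning = 0.1 + 10.0 * np.exp(-(pos_bins[None, :] - centers[:, None]) ** 2
                             / (2 * 0.1 ** 2))
# tuning[i, j]: expected spike count of cell i at position pos_bins[j]

def decode(counts, tuning):
    """Posterior over position: independent-Poisson likelihood, flat prior."""
    # log P(counts | pos) = sum_i counts_i * log(rate_i(pos)) - rate_i(pos) + const
    log_lik = counts @ np.log(tuning) - tuning.sum(axis=0)
    post = np.exp(log_lik - log_lik.max())   # subtract max for numerical stability
    return post / post.sum()

true_idx = np.argmin(np.abs(pos_bins - 0.3))
counts = rng.poisson(tuning[:, true_idx])    # one simulated population vector
post = decode(counts, tuning)
estimate = pos_bins[np.argmax(post)]         # MAP estimate of position
```

Cross-validation and bootstrapping, discussed in the lecture, quantify how reliable such posterior estimates are on held-out data.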
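For the hidden Markov model sessions, the core computation is the forward algorithm. A toy two-state, discrete-emission sketch (all parameters are made up for illustration):

```python
import numpy as np

# Toy 2-state HMM; the states could stand for e.g. distinct network regimes
A = np.array([[0.9, 0.1],       # transition matrix
              [0.2, 0.8]])
B = np.array([[0.7, 0.2, 0.1],  # emission probabilities per state (3 symbols)
              [0.1, 0.3, 0.6]])
pi = np.array([0.5, 0.5])       # initial state distribution

def forward(obs, A, B, pi):
    """Forward algorithm: log P(obs) and filtered state posteriors."""
    alpha = pi * B[:, obs[0]]
    log_p = np.log(alpha.sum())
    alpha /= alpha.sum()        # normalise to avoid numerical underflow
    posts = [alpha]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        log_p += np.log(alpha.sum())
        alpha /= alpha.sum()
        posts.append(alpha)
    return log_p, np.array(posts)

obs = [0, 0, 0, 2, 2, 2]
log_p, posts = forward(obs, A, B, pi)
# The filtered posterior tracks the switch: state 0 dominates early, state 1 late
```

In the tutorial, the same recursion (wrapped in EM for parameter learning) is applied to segment neural population activity into discrete states.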
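Finally, a sketch of the idea behind the linear Gaussian model sessions: when the activity of many neurons is driven by a few latent signals, PCA on the population recording recovers the latent dimensionality (simulated data, illustrative parameters):

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated data: 50 neurons whose activity is driven by 2 latent signals
T, n_neurons, n_latents = 1000, 50, 2
latents = rng.normal(size=(T, n_latents))
loading = rng.normal(size=(n_latents, n_neurons))
X = latents @ loading + 0.1 * rng.normal(size=(T, n_neurons))

# PCA via SVD of the mean-centred data matrix
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
var_explained = s ** 2 / np.sum(s ** 2)
# Nearly all variance is captured by the first n_latents components
```

Real recordings are noisier and non-Gaussian, which is exactly where the FA, ICA and GPFA variants discussed in the course come in.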