

IDEas Theoretical Neuroscience Seminar Series
2023 Speakers Line-Up  


Scott Linderman, Assistant Professor, Statistics Department at Stanford University | Co-sponsored by ARC


Talk 1: Nuts and Bolts of Modern State Space Models - Part I

March 28th |  1 p.m. - 2:30 p.m. |  Kendeda 230  | Zoom link:

Talk 1 Overview: State space models are fundamental tools for analyzing sequential data like neural and behavioral time series. These tools offer a lens into the latent states and dynamics underlying high-dimensional measurements. In the first lecture, I will cover the foundations of probabilistic state space modeling, assuming little background aside from linear algebra, multivariate calculus, and basic probability. We will cover discrete and continuous state space models like Hidden Markov Models and linear Gaussian dynamical systems, as well as more complex models like switching linear and nonlinear dynamical systems. I will discuss both exact and approximate algorithms for learning (i.e., parameter estimation) and inference (i.e., state estimation). We will intersperse mathematical derivations with code demos using the new dynamax library.
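To give a flavor of the state-estimation algorithms the lecture covers, here is a minimal NumPy sketch of the classic forward (filtering) pass for a discrete-state Hidden Markov Model. This is a generic textbook construction, not code from the talk or from the dynamax library, and all parameter values below are illustrative.

```python
import numpy as np

def hmm_forward(pi, A, likelihoods):
    """Normalized forward pass for a discrete-state HMM.

    pi:          (K,) initial state distribution
    A:           (K, K) transitions, A[i, j] = P(z_t = j | z_{t-1} = i)
    likelihoods: (T, K) emission likelihoods p(x_t | z_t = k)
    Returns filtered state probabilities (T, K) and log p(x_{1:T}).
    """
    T, K = likelihoods.shape
    alphas = np.zeros((T, K))
    log_marginal = 0.0
    pred = pi  # one-step-ahead predictive distribution over states
    for t in range(T):
        unnorm = pred * likelihoods[t]      # condition on observation x_t
        norm = unnorm.sum()
        log_marginal += np.log(norm)        # accumulate marginal likelihood
        alphas[t] = unnorm / norm           # filtered posterior p(z_t | x_{1:t})
        pred = alphas[t] @ A                # predict forward one step
    return alphas, log_marginal

# Toy 2-state example (illustrative numbers, not from the lecture)
pi = np.array([0.5, 0.5])
A = np.array([[0.9, 0.1],
              [0.2, 0.8]])
liks = np.array([[0.8, 0.1],
                 [0.7, 0.2],
                 [0.1, 0.9]])
filtered, ll = hmm_forward(pi, A, liks)
```

The same predict/condition recursion, with Gaussian algebra replacing the discrete sums, gives the Kalman filter for linear Gaussian dynamical systems.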


Talk 2: Nuts and Bolts of Modern State Space Models - Part II

March 29th |  1 p.m. - 2:30 p.m. |  IBB 1128 | Zoom link:

Talk 2 Overview: Building on the foundations established in Part I, I will present recent research from my lab and others on new methods for state space modeling and inference. I will start with our work on structured variational autoencoders, which combine deep neural networks with probabilistic state space models. Then I will discuss new algorithms for inference in nonlinear state space models using sequential Monte Carlo with learned "twists" (Lawson et al., NeurIPS 2022). Finally, I'll present exciting new work from my lab (Smith et al., ICLR 2023) that uses simple state space layers to achieve state-of-the-art performance on long-range sequence modeling benchmarks in machine learning and neuroscience.
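The twisted sequential Monte Carlo methods mentioned above build on the basic bootstrap particle filter, which a minimal sketch can illustrate. The model, noise scales, and dynamics function below are all made up for the demo; the talk's learned "twists" modify the proposal and weighting steps, which are shown here in their plain bootstrap form.

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_particle_filter(y, num_particles=500, obs_std=0.5):
    """Bootstrap particle filter for a toy 1-D nonlinear SSM.

    Illustrative model (not from the talk):
        z_t = 0.9 * z_{t-1} + 0.1 * sin(z_{t-1}) + N(0, 0.1^2)
        y_t = z_t + N(0, obs_std^2)
    Returns filtered posterior means and a log-likelihood estimate.
    """
    T = len(y)
    particles = rng.normal(0.0, 1.0, size=num_particles)
    means = np.zeros(T)
    log_like = 0.0
    for t in range(T):
        # Propose: push particles through the nonlinear dynamics.
        particles = (0.9 * particles + 0.1 * np.sin(particles)
                     + rng.normal(0.0, 0.1, size=num_particles))
        # Weight: Gaussian observation log-likelihood of each particle.
        log_w = (-0.5 * ((y[t] - particles) / obs_std) ** 2
                 - np.log(obs_std) - 0.5 * np.log(2 * np.pi))
        m = log_w.max()
        log_like += m + np.log(np.mean(np.exp(log_w - m)))
        w = np.exp(log_w - m)
        w /= w.sum()
        means[t] = np.sum(w * particles)
        # Resample (multinomial) to combat weight degeneracy.
        idx = rng.choice(num_particles, size=num_particles, p=w)
        particles = particles[idx]
    return means, log_like

# Run on synthetic observations (for demonstration only).
y = rng.normal(0.0, 1.0, size=20)
means, ll = bootstrap_particle_filter(y)
```

Averaging the unnormalized weights at each step yields an unbiased estimate of the marginal likelihood, which is the quantity the learned twists are designed to tighten.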

Speaker Bio: Scott Linderman, PhD, is an Assistant Professor at Stanford University in the Statistics Department and the Wu Tsai Neurosciences Institute. His research focuses on machine learning, computational neuroscience, and the general question of how computational and statistical methods can help to decipher neural computation. His work combines novel methodological development in the areas of state space models, deep generative models, point processes, and approximate Bayesian inference with applied statistical analyses of large-scale neural and behavioral data. Previously, he was a postdoctoral fellow with David Blei and Liam Paninski at Columbia University and a graduate student at Harvard University with Ryan Adams. His work has been recognized with a Savage Award from the International Society for Bayesian Analysis, an AISTATS Best Paper Award, and a Sloan Fellowship.

Host: Siva Theja Maguluri

Bruno Olshausen (Berkeley)


Talk: A Lie Theoretic Approach to Unsupervised Learning

April 11th |  2 p.m. - 3 p.m. |  Klaus 2443  | Zoom link: TBD

Talk Overview: TBA

Speaker Webpage:

Host: Hannah Choi


Nancy Lynch | Co-sponsored by ARC


Speaker Webpage:

Host: Debankur Mukherjee


Seminar Series Contacts: Hannah Choi, Siva Theja Maguluri, Debankur Mukherjee