

IDEaS Theoretical Neuroscience Seminar Series 
2023 Speakers Line-Up  

 


Title: TBA

Nancy Lynch | NEC Professor of Software Science and Engineering, Professor of Electrical Engineering and Computer Science, Massachusetts Institute of Technology

Co-sponsored by ARC

 

October 2023

Zoom link:

Talk Overview: TBA

Bio: Nancy Lynch is the NEC Professor of Software Science and Engineering in MIT's EECS department. She heads the Theory of Distributed Systems research group in the Computer Science and Artificial Intelligence Laboratory (CSAIL). She received her B.S. from Brooklyn College and her PhD from MIT, both in mathematics.

Lynch has written many research articles about distributed algorithms and impossibility results, and about formal modeling and verification of distributed systems. Her best-known contribution is the "FLP" impossibility result for distributed consensus in the presence of process failures, with Fischer and Paterson. Other contributions include the I/O automata modeling frameworks, with Tuttle, Kaynar, Segala, and Vaandrager. Her recent work is focused on wireless network algorithms and biological distributed algorithms.

Lynch is the author of the book "Distributed Algorithms" and co-author of "The Theory of Timed I/O Automata". She is an ACM Fellow, a Fellow of the American Academy of Arts and Sciences, and a member of both the National Academy of Engineering and the National Academy of Sciences. She has been awarded the Dijkstra Prize (twice), the Van Wijngaarden Award, the Knuth Prize, the Piore Award, and the Athena Lecturer Award. She has supervised approximately 30 PhD students and over 50 Masters students.

Speaker Webpage: https://people.csail.mit.edu/lynch/

Host: Debankur Mukherjee

   
Seminar Series Contacts: Hannah Choi (hannahch@gatech.edu), Siva Theja Maguluri (siva.theja@gatech.edu), Debankur Mukherjee (debankur.mukherjee@isye.gatech.edu). 

 

2023 Prior Speakers

Ashok Litwin-Kumar | Assistant Professor, Department of Neuroscience, Columbia University

Talk I: Dimension of Activity in Random Feedforward Networks and Cerebellum-Like Systems

Feb. 21st |  2 p.m. - 3 p.m. |  Klaus 2443 

Talk II: Dimension of Activity in Random Recurrent Network Systems

Feb. 22nd |  2 p.m. - 3 p.m. |  Mason 2113

Talks Overview: Neural networks are high-dimensional systems whose activity forms a basis for learning and memory. Measured activity in biological and artificial neural networks does not uniformly fill the space of all possible activity patterns; instead, it is constrained to low-dimensional manifolds whose structure is related both to the architecture of the network and to the nature of the inputs it receives. I will introduce the notion of the linear embedding dimension as a useful metric for describing neural network activity and discuss its relationship with learning. I will describe work we have done in feedforward networks computing this quantity and relating it to generalization performance on learning tasks and to the anatomical organization of cerebellum-like systems. I will then describe recent work in which we have begun to analyze the dimension of random recurrent networks in the chaotic state.
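For readers who want a concrete handle on the quantity discussed above, the short Python sketch below computes the participation ratio of hidden-layer activity in a small random feedforward network. The participation ratio is one common proxy for linear embedding dimension; the network, layer sizes, and nonlinearity here are illustrative assumptions, not the models analyzed in the talks.

```python
# Minimal sketch (illustrative, not the speaker's code): participation ratio of
# activity in a random feedforward network, a common proxy for linear dimension.
import numpy as np

rng = np.random.default_rng(0)
n_inputs, n_hidden, n_samples = 50, 1000, 5000

# Random expansion layer with a rectifying nonlinearity (cerebellum-like sketch).
W = rng.normal(0.0, 1.0 / np.sqrt(n_inputs), size=(n_hidden, n_inputs))
x = rng.normal(size=(n_samples, n_inputs))   # input patterns (rows)
h = np.maximum(W @ x.T, 0.0)                 # hidden-layer activity, shape (n_hidden, n_samples)

# Participation ratio: (sum of covariance eigenvalues)^2 / sum of squared eigenvalues.
h_centered = h - h.mean(axis=1, keepdims=True)
cov = h_centered @ h_centered.T / n_samples
eigvals = np.linalg.eigvalsh(cov)
pr = eigvals.sum() ** 2 / np.sum(eigvals ** 2)
print(f"participation ratio ~ {pr:.1f} (out of {n_hidden} neurons)")
```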

Speaker Bio: Ashok Litwin-Kumar is an assistant professor in the Department of Neuroscience and a member of the Center for Theoretical Neuroscience and the Zuckerman Institute. Research in his group focuses on learning algorithms and their neural implementations. How do organisms use their past experiences to adapt their current behavior? How do these neural algorithms compare to those studied in machine learning and artificial intelligence? His team approaches these questions by working closely with experimental collaborators and building well-constrained models of learning and synaptic plasticity.

Host: Hannah Choi


Scott Linderman | Assistant Professor, Department of Statistics, Stanford University | Co-sponsored by ARC

Talk 1: Nuts and Bolts of Modern State Space Models - Part I

March 28th |  1 p.m. - 2:30 p.m. |  Kendeda 230  | Zoom link: https://gatech.zoom.us/j/97422786578

Talk 1 Overview: State space models are fundamental tools for analyzing sequential data like neural and behavioral time series. These tools offer a lens into the latent states and dynamics underlying high-dimensional measurements. In the first lecture, I will cover the foundations of probabilistic state space modeling, assuming little background aside from linear algebra, multivariate calculus, and basic probability. We will cover discrete and continuous state space models like Hidden Markov Models and linear Gaussian dynamical systems, as well as more complex models like switching linear and nonlinear dynamical systems. I will discuss both exact and approximate algorithms for learning (i.e., parameter estimation) and inference (i.e., state estimation). We will intersperse mathematical derivations with code demos using the new dynamax library.
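As a concrete companion to the overview above, the self-contained Python sketch below samples from a linear Gaussian state space model and runs a Kalman filter for state estimation. The talk itself uses the dynamax library; this plain NumPy version, with made-up dynamics and noise parameters, is only meant to illustrate the predict/update structure of inference in these models.

```python
# Minimal NumPy sketch of a linear Gaussian state space model and Kalman filter.
# Parameters are illustrative assumptions, not from the talk.
import numpy as np

rng = np.random.default_rng(1)

# Model: z_t = A z_{t-1} + w_t,   x_t = C z_t + v_t
A = np.array([[0.99, 0.10], [-0.10, 0.99]])   # latent dynamics (slow rotation)
C = rng.normal(size=(5, 2))                   # emission matrix
Q = 0.01 * np.eye(2)                          # process noise covariance
R = 0.10 * np.eye(5)                          # observation noise covariance

# Sample a latent trajectory and noisy observations.
T = 200
z = np.zeros((T, 2)); x = np.zeros((T, 5))
z[0] = rng.normal(size=2)
for t in range(1, T):
    z[t] = A @ z[t - 1] + rng.multivariate_normal(np.zeros(2), Q)
for t in range(T):
    x[t] = C @ z[t] + rng.multivariate_normal(np.zeros(5), R)

# Kalman filter: recursive state estimation p(z_t | x_{1:t}).
mu, P = np.zeros(2), np.eye(2)
filtered = np.zeros((T, 2))
for t in range(T):
    mu, P = A @ mu, A @ P @ A.T + Q           # predict step
    S = C @ P @ C.T + R                       # innovation covariance
    K = P @ C.T @ np.linalg.inv(S)            # Kalman gain
    mu = mu + K @ (x[t] - C @ mu)             # update step
    P = (np.eye(2) - K @ C) @ P
    filtered[t] = mu

print("mean squared filtering error:", np.mean((filtered - z) ** 2))
```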

Talk 2: Nuts and Bolts of Modern State Space Models - Part II

March 29th |  1 p.m. - 2:30 p.m. |  IBB 1128

Talk 2 Overview: Building on the foundations established in Part I, I will present some recent research from my lab and others on new methods for state space modeling and inference. I will start with our work on structured variational autoencoders, which combine deep neural networks with probabilistic state space models. Then I will discuss new algorithms for inference in nonlinear state space models using sequential Monte Carlo with learned "twists" (Lawson et al, NeurIPS 2022). Finally, I'll present exciting new work from my lab (Smith et al, ICLR 2023) that uses simple state space layers to achieve state-of-the-art performance on long-range sequence modeling benchmarks in machine learning and neuroscience.

Speaker Bio: Scott Linderman, PhD, is an Assistant Professor at Stanford University in the Statistics Department and the Wu Tsai Neurosciences Institute. His research focuses on machine learning, computational neuroscience, and the general question of how computational and statistical methods can help to decipher neural computation. His work combines novel methodological development in the areas of state space models, deep generative models, point processes, and approximate Bayesian inference with applied statistical analyses of large-scale neural and behavioral data. Previously, he was a postdoctoral fellow with David Blei and Liam Paninski at Columbia University and a graduate student at Harvard University with Ryan Adams. His work has been recognized with a Savage Award from the International Society for Bayesian Analysis, an AISTATS Best Paper Award, and a Sloan Fellowship.

Host: Siva Theja Maguluri


Bruno Olshausen | Professor, Helen Wills Neuroscience Institute & School of Optometry, and Director, Redwood Center for Theoretical Neuroscience, U.C. Berkeley

Title: In Search of Invariance in Brains and Machines

May 15th |  11 a.m. - 12 p.m. |  IBB 1128

Talk Overview: Despite their seemingly impressive performance at image recognition and other perceptual tasks, deep convolutional neural networks are easily fooled, sensitive to adversarial attack, and have trouble generalizing to data outside the training domain that arise from everyday interactions with the real world. The premise of this talk is that these shortcomings stem from the lack of an appropriate mathematical framework for posing the problems at the core of deep learning: in particular, modeling hierarchical structure and describing transformations, such as variations in pose, that occur when viewing objects in the real world. Here I will describe an approach that draws from a well-developed branch of mathematics for representing and computing these transformations: Lie theory. In particular, I shall describe a method for learning shapes and their transformations from images in an unsupervised manner using Lie Group Sparse Coding. Additionally, I will show how the generalized bispectrum can potentially be used to learn invariant representations that are complete and impossible to fool.
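To make the Lie-theoretic idea concrete, the short Python sketch below builds a one-parameter transformation group from a single generator matrix and applies it via the matrix exponential. This toy rotation example is an illustrative assumption, not the Lie Group Sparse Coding model described in the talk.

```python
# Minimal sketch of the Lie-theoretic idea: a one-parameter transformation group
# generated by a single matrix and applied via the matrix exponential.
import numpy as np
from scipy.linalg import expm

# Infinitesimal generator of 2D rotations (an element of the Lie algebra so(2)).
G = np.array([[0.0, -1.0],
              [1.0,  0.0]])

# A small "shape": two 2D points, one per row.
points = np.array([[1.0, 0.0],
                   [0.0, 2.0]])

for theta in (0.0, np.pi / 4, np.pi / 2):
    T = expm(theta * G)            # group element exp(theta * G): rotation by theta
    transformed = points @ T.T     # apply the transformation to each point
    print(f"theta = {theta:.2f}:\n{transformed.round(3)}")
```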

Bio: Bruno Olshausen is a Professor in the Helen Wills Neuroscience Institute and the School of Optometry at UC Berkeley, with an affiliated appointment in EECS. He holds B.S. and M.S. degrees in Electrical Engineering from Stanford University, and a Ph.D. in Computation and Neural Systems from the California Institute of Technology. He did his postdoctoral work in the Department of Psychology at Cornell University and at the Center for Biological and Computational Learning at the Massachusetts Institute of Technology. From 1996 to 2005 he was on the faculty in the Center for Neuroscience at UC Davis, and in 2005 he moved to UC Berkeley. He also directs the Redwood Center for Theoretical Neuroscience, a multidisciplinary research group focused on building mathematical and computational models of brain function (see http://redwood.berkeley.edu).

Olshausen's research focuses on understanding the information processing strategies employed by the visual system for tasks such as object recognition and scene analysis. Computer scientists have long sought to emulate the abilities of the visual system in digital computers, but achieving performance anywhere close to that exhibited by biological vision systems has proven elusive. Dr. Olshausen's approach is based on studying the response properties of neurons in the brain and attempting to construct mathematical models that can describe what neurons are doing in terms of a functional theory of vision. The aim of this work is not only to advance our understanding of the brain but also to devise new algorithms for image analysis and recognition based on how brains work.

Speaker Webpage: http://www.rctn.org/bruno/

Host: Hannah Choi