Martha White is an Associate Professor at the University of Alberta. Her research focuses on developing reinforcement learning and representation learning techniques for adaptive, autonomous agents learning on streams of data.
Her PhD thesis, "Regularized Factor Models", was completed in 2014 at the University of Alberta. We discuss the regularized factor model framework, which unifies many machine learning methods and led to new algorithms and applications. We talk about sparsity and how it also appears in her later work, as well as the common threads between her thesis work and her research in reinforcement learning.
Episode notes: https://cs.nyu.edu/~welleck/episode12.html
Follow the Thesis Review (@thesisreview) and Sean Welleck (@wellecks) on Twitter, and find out more info about the show at https://cs.nyu.edu/~welleck/podcast.html
Support The Thesis Review at www.buymeacoffee.com/thesisreview