Machine Learning Street Talk (MLST)
This week Dr. Tim Scarfe and Dr. Keith Duggar discuss explainability, reasoning, priors and GPT-3. We check out Christoph Molnar's book on interpretability, talk about priors vs experience in NNs and whether NNs are reasoning, and cover articles by Gary Marcus and Walid Saba critiquing deep learning. We finish with a brief discussion of Chollet's ARC challenge and intelligence paper.
00:00:00 Intro
00:01:17 Explainability and Christoph Molnar's book on Interpretability
00:26:45 Explainability - Feature visualisation
00:33:28 Architecture / CPPNs
00:36:10 Invariance and data parsimony, priors and experience, manifolds
00:42:04 What NNs learn / logical view of modern AI (Walid Saba article)
00:47:10 Core knowledge
00:55:33 Priors vs experience
00:59:44 Mathematical reasoning
01:01:56 Gary Marcus on GPT-3
01:09:14 Can NNs reason at all?
01:18:05 Chollet's intelligence paper / ARC challenge