
The Gradient: Perspectives on AI

Zachary Lipton: Where Machine Learning Falls Short

101 min • 13 October 2022

* Have suggestions for future podcast guests (or other feedback)? Let us know here!

* Want to write with us? Send a pitch using this form :)

In episode 45 of The Gradient Podcast, Daniel Bashir speaks to Zachary Lipton.

Zachary is an Assistant Professor of Machine Learning and Operations Research at Carnegie Mellon University, where he directs the Approximately Correct Machine Intelligence Lab. He holds a joint appointment between CMU’s ML Department and Tepper School of Business, and holds courtesy appointments at the Heinz School of Public Policy and the Software and Societal Systems Department. His research spans core ML methods and theory, applications in healthcare and natural language processing, and critical concerns about algorithms and their impacts.

Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS. Follow The Gradient on Twitter.

Outline:

* (00:00) Intro

* (2:30) From jazz music to AI

* (4:40) “Fix it in post”: we had some technical issues :)

* (4:50) Spicy takes, music, and tech

* (7:30) Zack’s plan to get into grad school

* (9:45) Selection bias in who gets faculty positions

* (12:20) The slow development of Zack’s wide range of research interests, Zack’s strengths coming into ML research

* (22:00) How Zack got attention early in his PhD

* (27:00) Should PhD students meander?

* (30:30) Faults in the QA model literature

* (35:00) Troubling Trends, antecedents in other fields

* (39:40) Pretraining LMs on nonsense words, and a new paper

* The new paper (released 9/29)

* (47:25) What “BERT learns linguistic structure” misses

* (56:00) Making causal claims in ML

* (1:05:40) Domain-adversarial networks don’t solve distribution shift; underspecified problems

* (1:09:10) The benefits of floating between communities

* (1:14:30) Advice on finding inspiration and learning

* (1:16:00) “Fairness” and ML solutionism

* (1:21:10) Epistemic questions, how we make determinations of fairness

* (1:29:00) Zack’s drives and motivations

Links:

* Zachary’s Homepage

* Papers

* DL Foundations, Distribution Shift, Generalization

* Does Pretraining for Summarization Require Knowledge Transfer?

* How Much Reading Does Reading Comprehension Require?

* Learning Robust Global Representations by Penalizing Local Predictive Power

* Detecting and Correcting for Label Shift with Black Box Predictors

* RATT

* Explanation/Interpretability/Fairness

* The Mythos of Model Interpretability

* Evaluating Explanations

* Does mitigating ML’s impact disparity require treatment disparity?

* Algorithmic Fairness from a Non-ideal Perspective

* Broader perspectives/critiques

* Troubling Trends in Machine Learning Scholarship

* When Curation Becomes Creation



Get full access to The Gradient at thegradientpub.substack.com/subscribe