Machine Learning Street Talk (MLST)
In this episode of Machine Learning Street Talk, we chat with Jonathan Frankle, author of The Lottery Ticket Hypothesis. Frankle has continued researching Sparse Neural Networks, Pruning, and Lottery Tickets, leading to some really exciting follow-on papers! This chat covers several of those papers, such as Linear Mode Connectivity and the Lottery Ticket Hypothesis, Comparing Rewinding and Fine-tuning in Neural Network Pruning, and more (full list of papers linked below). We also chat about how Jonathan got into Deep Learning research, his Information Diet, and his work on developing Technology Policy for Artificial Intelligence!
This was a really fun chat; I hope you enjoy listening to it and learn something from it!
Thanks for watching and please subscribe!
Huge thanks to everyone on r/MachineLearning who asked questions!
Links to the papers discussed in the chat:
The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks: https://arxiv.org/abs/1803.03635
Linear Mode Connectivity and the Lottery Ticket Hypothesis: https://arxiv.org/abs/1912.05671
Dissecting Pruned Neural Networks: https://arxiv.org/abs/1907.00262
Training BatchNorm and Only BatchNorm: On the Expressive Power of Random Features in CNNs: https://arxiv.org/abs/2003.00152
What is the State of Neural Network Pruning? https://arxiv.org/abs/2003.03033
The Early Phase of Neural Network Training: https://arxiv.org/abs/2002.10365
Comparing Rewinding and Fine-tuning in Neural Network Pruning: https://arxiv.org/abs/2003.02389
(Also Mentioned)
Block-Sparse GPU Kernels: https://openai.com/blog/block-sparse-gpu-kernels/
Balanced Sparsity for Efficient DNN Inference on GPU: https://arxiv.org/pdf/1811.00206.pdf
Playing the Lottery with Rewards and Multiple Languages: Lottery Tickets in RL and NLP: https://arxiv.org/pdf/1906.02768.pdf
r/MachineLearning question list: https://www.reddit.com/r/MachineLearning/comments/g9jqe0/d_lottery_ticket_hypothesis_ask_the_author_a/
#machinelearning #deeplearning