
Machine Learning Street Talk (MLST)

CURL: Contrastive Unsupervised Representations for Reinforcement Learning

75 min • 2 May 2020

According to Yann LeCun, the next big thing in machine learning is unsupervised learning. Self-supervision has changed the entire game in deep learning in the last few years, first transforming the language world with word2vec and BERT -- and now it's turning computer vision upside down. 


This week Yannic, Connor and I spoke with one of the authors, Aravind Srinivas, who recently co-led the hot-off-the-press CURL: Contrastive Unsupervised Representations for Reinforcement Learning alongside Michael (Misha) Laskin. CURL has had an incredible reception in the ML community over the last month or so. Remember the DeepMind paper which solved the Atari games using the raw pixels? Aravind's approach uses contrastive unsupervised learning to featurise the pixels before applying RL. CURL is the first image-based algorithm to nearly match the sample-efficiency and performance of methods that use state-based features! This is a huge step forward in being able to apply RL in the real world. 
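The contrastive step described above can be illustrated with a minimal InfoNCE-style loss sketch. This is a simplified NumPy illustration, not the authors' implementation: the bilinear similarity `q W k` between a query embedding and a positive key (in the paper the keys come from a momentum encoder applied to a differently augmented crop of the same frame) is scored against all other keys in the batch as negatives.

```python
import numpy as np

def info_nce_loss(queries, keys, W):
    """Contrastive (InfoNCE-style) loss with bilinear similarity.

    queries: (B, D) embeddings of augmented observations
    keys:    (B, D) embeddings of matching positives (row i pairs with row i)
    W:       (D, D) learned bilinear similarity matrix
    """
    logits = queries @ W @ keys.T                 # (B, B) pairwise scores
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    # softmax cross-entropy with the positive on the diagonal
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    idx = np.arange(len(queries))
    return -log_probs[idx, idx].mean()
```

Minimising this loss pulls each query toward its own positive key while pushing it away from the other keys in the batch, which is what yields useful pixel features before any reward signal is used.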


We explore RL and self-supervision for computer vision in detail and find out about how Aravind got into machine learning. 


Original YouTube Video: https://youtu.be/1MprzvYNpY8


Paper:

CURL: Contrastive Unsupervised Representations for Reinforcement Learning

Aravind Srinivas, Michael Laskin, Pieter Abbeel

https://arxiv.org/pdf/2004.04136.pdf


Yannic's analysis video: https://www.youtube.com/watch?v=hg2Q_O5b9w4 


#machinelearning #reinforcementlearning #curl #timscarfe #yannickilcher #connorshorten


Music credit: https://soundcloud.com/errxrmusic/in-my-mind
