As impressive as LLMs are, the growing consensus is that language, scale, and compute alone won't get us to AGI. Although AI models have quickly reached human-level performance on many benchmarks, one eval has barely budged since it was created in 2019.
Google researcher François Chollet wrote a paper that year defining intelligence as skill-acquisition efficiency—the ability to learn new skills as humans do, from a small number of examples. To make it testable he proposed a new benchmark, the Abstraction and Reasoning Corpus (ARC), designed to be easy for humans, but hard for AI. Notably, it doesn’t rely on language.
Zapier co-founder Mike Knoop read Chollet’s paper as the LLM wave was rising. He worked quickly to integrate generative AI into Zapier’s product, but kept coming back to the lack of progress on the ARC benchmark. In June, Knoop and Chollet launched the ARC Prize, a public competition offering more than $1M to beat and open-source a solution to the ARC-AGI eval.
In this episode, Mike talks about the new ideas required to solve ARC, shares updates from the first two weeks of the competition, and explains why he's excited for AGI systems that can innovate alongside humans.
Hosted by: Sonya Huang and Pat Grady, Sequoia Capital
(00:00) Introduction
(01:51) AI at Zapier
(08:31) What is ARC-AGI?
(13:25) What does it mean to efficiently acquire a new skill?
(19:03) What approaches will succeed?
(21:11) A little bit of a different shape
(25:59) The role of code generation and program synthesis
(29:11) What types of people are working on this?
(31:45) Trying to prove you wrong
(34:50) Where are the big labs?
(38:21) The world post-AGI
(42:51) When will we cross 85% on ARC-AGI?
(46:12) Will LLMs be part of the solution?
(50:13) Lightning round