AI for Epistemics is about leveraging AI to build better truthseeking mechanisms: for individual users, for society as a whole, and in transparent ways within the AI systems themselves. Manifund & Elicit recently hosted a hackathon to explore new projects in the space, with about 40 participants, 9 projects judged, and 3 winners splitting a $10k prize pool. Read on to see what we built!
Resources
Why this hackathon?
From the opening speeches; lightly edited.
Andreas Stuhlmüller: Why I'm excited about AI for Epistemics
In short - AI for Epistemics is important [...]
---
Outline:
(00:42) Resources
(01:14) Why this hackathon?
(01:22) Andreas Stuhlmüller: Why I'm excited about AI for Epistemics
(03:25) Austin Chen: Why a hackathon?
(05:25) Meet the projects
(05:36) Question Generator, by Gustavo Lacerda
(06:27) Symphronesis, by Campbell Hutcheson (winner)
(08:21) Manifund Eval, by Ben Rachbach and William Saunders
(09:36) Detecting Fraudulent Research, by Panda Smith and Charlie George (winner)
(11:14) Artificial Collective Intelligence, by Evan Hadfield
(12:05) Thought Logger and Cyborg Extension, by Raymond Arnold
(14:09) Double-cruxes in the New York Times' The Conversation, by Tilman Bayer
(15:37) Trying to make GPT 4.5 Non-sycophantic (via a better system prompt), by Oliver Habryka
(16:37) Squaretable, by David Nachman (winner)
(17:45) What went well
(20:18) What could have gone better
(22:23) Final notes
---
First published:
March 14th, 2025
Source:
https://www.lesswrong.com/posts/Gi8NP9CMwJMMSCWvc/ai-for-epistemics-hackathon
Narrated by TYPE III AUDIO.
---