This has been a rough week for pretty much everyone. While I have had to deal with many things, and oh how I wish I could stop checking news sources for a while, others have had it far worse. I am doing my best to count my blessings and to preserve my mental health, and here I will stick to AI. As always, the AI front does not stop.
Table of Contents
---
(00:26) Language Models Offer Mundane Utility
(03:53) Language Models Don’t Offer Mundane Utility
(09:53) GPT-4 Real This Time
(11:03) Fun with Image Generation
(16:14) Deepfaketown and Botpocalypse Soon
(17:03) They Took Our Jobs
(21:13) Get Involved
(21:39) Introducing
(27:56) In Other AI News
(30:25) Cool New Interpretability Paper
(33:20) So What Do We All Think of The Cool Paper?
(41:46) Alignment Work and Model Capability
(43:09) Quiet Speculations
(46:33) The Week in Audio
(47:39) Rhetorical Innovation
(57:27) Aligning a Smarter Than Human Intelligence is Difficult
(01:05:25) Aligning Dumber Than Human Intelligences is Also Difficult
(01:08:01) Open Source AI is Unsafe and Nothing Can Fix This
(01:10:11) Predictions are Hard Especially About the Future
(01:13:47) Other People Are Not As Worried About AI Killing Everyone
(01:22:36) The Lighter Side
---
First published:
October 12th, 2023
Source:
https://www.lesswrong.com/posts/pD5rkAvtwp25tyfRN/ai-33-cool-new-interpretability-paper
Narrated by TYPE III AUDIO.