LessWrong (Curated & Popular)

“Optimistic Assumptions, Longterm Planning, and ‘Cope’” by Raemon

13 min • July 19, 2024
Eliezer Yudkowsky periodically complains about people coming up with questionable plans with questionable assumptions to deal with AI, and then either:

  • Saying "well, if this assumption doesn't hold, we're doomed, so we might as well assume it's true."
  • Worse: coming up with cope-y reasons to assume that the assumption isn't even questionable at all. It's just a pretty reasonable worldview.
Sometimes the questionable plan is "an alignment scheme, which Eliezer thinks avoids the hard part of the problem." Sometimes it's a sketchy, reckless plan that's probably going to blow up and make things worse.

Some people complain about Eliezer being a doomy Negative Nancy who's overly pessimistic.

I had an interesting experience a few months ago, when I ran some beta tests of my Planmaking and Surprise Anticipation workshop, that I think is illustrative.

i. Slipping into a more Convenient World

I have an exercise where I give people [...]

---

Outline:

(00:59) i. Slipping into a more Convenient World

(04:26) ii. Finding traction in the wrong direction.

(06:47) Takeaways

---

First published:
July 17th, 2024

Source:
https://www.lesswrong.com/posts/8ZR3xsWb6TdvmL8kx/optimistic-assumptions-longterm-planning-and-cope

---

Narrated by TYPE III AUDIO.
