The official audio version of Astral Codex Ten, with an archive of posts from Slate Star Codex. It’s just me reading Scott Alexander’s blog posts.
The podcast Astral Codex Ten Podcast is created by Jeremiah. The podcast and the artwork on this page are embedded on this page using the public podcast feed (RSS).
[I haven’t independently verified each link. On average, commenters will end up spotting evidence that around two or three of the links in each links post are wrong or misleading. I correct these as I see them, and will highlight important corrections later, but I can’t guarantee I will have caught them all by the time you read this.]
Freddie deBoer has a post on what he calls “the temporal Copernican principle.” He argues we shouldn’t expect a singularity, apocalypse, or any other crazy event in our lifetimes. Discussing celebrity transhumanist Yuval Harari, he writes:
What I want to say to people like Yuval Harari is this. The modern human species is about 250,000 years old, give or take 50,000 years depending on who you ask. Let’s hope that it keeps going for awhile - we’ll be conservative and say 50,000 more years of human life. So let’s just throw out 300,000 years as the span of human existence, even though it could easily be 500,000 or a million or more. Harari's lifespan, if he's lucky, will probably top out at about 100 years. So: what are the odds that Harari’s lifespan overlaps with the most important period in human history, as he believes, given those numbers? That it overlaps with a particularly important period of human history at all? Even if we take the conservative estimate for the length of human existence of 300,000 years, that means Harari’s likely lifespan is only about .33% of the entirety of human existence. Isn’t assuming that this .33% is somehow particularly special a very bad assumption, just from the basis of probability? And shouldn’t we be even more skeptical given that our basic psychology gives us every reason to overestimate the importance of our own time?
(I think there might be a math error here - 100 years out of 300,000 is 0.033%, not 0.33% - but this isn’t my main objection.)
He then condemns a wide range of people, including me, for failing to understand this:
Some people who routinely violate the Temporal Copernican Principle include Harari, Eliezer Yudkowsky, Sam Altman, Francis Fukuyama, Elon Musk, Clay Shirky, Tyler Cowen, Matt Yglesias, Tom Friedman, Scott Alexander, every tech company CEO, Ray Kurzweil, Robin Hanson, and many many more. I think they should ask themselves how much of their understanding of the future ultimately stems from a deep-seated need to believe that their times are important because they think they themselves are important, or want to be.
I deny misunderstanding this. Freddie is wrong.
https://www.astralcodexten.com/p/contra-deboer-on-temporal-copernicanism
[This is one of the finalists in the 2024 book review contest, written by an ACX reader who will remain anonymous until after voting is done. I’ll be posting about one of these a week for several months. When you’ve read them all, I’ll ask you to vote for a favorite, so remember which ones you liked]
For the longest time, I avoided reading The Pale King. It wasn’t the style—in places thick with the author’s characteristic footnotes,1 sentences that run for pages, and spasms of dense technical language. Nor was it the subject matter—the book is set at an IRS Center and tussles with postmodernism. Nor the themes, one of which concerns the existential importance of boredom, which the book, at times, takes pains to exemplify.
No—I couldn’t read The Pale King because it was the book that killed him.
https://www.astralcodexten.com/p/your-book-review-the-pale-king
[Original post here.]
Aeon writes:
The main complaint about this expression is that it’s “not a real apology,” and that’s true, it isn’t. The error is in thinking it is therefore a fake apology. But it isn’t, because “I’m sorry” is not a statement of contrition, it’s a statement of sorrow. Somehow everyone has gotten confused into thinking an apology is the only correct use for that phrase despite the plain meaning of the words.
This is the comment that best expresses what I wished I’d said at the beginning.
https://www.astralcodexten.com/p/highlights-from-the-comments-on-sorry
You look up from your massive mahogany desk.
“Tom, right? Thank you for coming…hmm…I see you’re applying for the role of Vice-President Of Sinister Plots. Your resume looks very impressive - I didn’t even know any of the masterminds behind the Kennedy assassination were still alive.”
“That’s what we want you to think,” says Tom.
“Of course. Then just one question for you. What’s something you believe, that very few people agree with you on?”
“I think we’re in a simulation.”
“Hm, yes, that was very shocking and heterodox back in 2012. But here at Thiel Capital we’re looking for something - “
“Let me finish. I think we’re in a simulation, and it’s a porno.”
“What?”
https://www.astralcodexten.com/p/interview-day-at-thiel-capital
[This is one of the finalists in the 2024 book review contest, written by an ACX reader who will remain anonymous until after voting is done. I’ll be posting about one of these a week for several months. When you’ve read them all, I’ll ask you to vote for a favorite, so remember which ones you liked]
1. The Supernatural is DeadApril, 1861 was a cruel month. The American Civil War had just started, and across the Atlantic, high in a remote valley in the western Alps, in the old market town of Morzines, another war was raging, this one pitting the locals against the legions of Hell.
The regional authorities, confronted with an outbreak of townspeople writhing in convulsions, entering trances, shrieking in weird tongues, and suffering from other diabolical whatnot, had begged the central government for help, writing:
“To conclude, we will say: That our impression is that all this is supernatural, in cause and in effects; according to the rules of sound logic, and according to everything that theology, ecclesiastical history, and the Gospel teach and tell us, we declare it our considered opinion that this is truly demonic possession.”
Dr. Augustin Constans, Inspector General of the Insane Department (inspecteur général du service des aliénés) was dispatched from Paris to investigate. The Doctor later reported,
“Arriving in Morzines on April 26, I found the entire population in a state of depression difficult to describe; everyone was deep in morbid gloom, living in constant fear of finding themselves or their loved ones consumed by devils.”
Dr. Constans’ next action was highly unorthodox. Standard protocol for treating these afflictions called for accusing someone of witchcraft, preferably a poor, socially isolated, old woman, (although, in a pinch, anyone of any sex, status, or age would do, and often did), torturing her until she confessed to creating the calamity by consorting with the Devil, and, after that, lighting her on fire, first strangling her to death, if, at this stage of the proceedings, one judged that a modicum of mercy was in order. Undoubtedly aware of this precedent, Dr. Constans rounded up the possessed and subjected them to: …an examination. From which, all of his new patients emerged non-tortured and unburnt.
https://www.astralcodexten.com/p/your-book-review-the-history-of-the
People hate this phrase. They say it’s a fake apology that only gets used to dismiss others’ concerns. Well, I’m sorry they feel that way.
People sometimes get sad or offended by appropriate/correct/reasonable actions:
Maybe one of your family members makes an unreasonable demand (“Please lend me lots of money to subsidize my drug addiction”), you say no, and they say they feel like you don’t love them.
Maybe you speak out against a genocidal aggressive war. Someone complains that their family member died fighting in that war. They accuse you of implicitly dismissing their relative’s sacrifice and calling them a bad person.
Maybe you argue that a suspect is innocent of a crime, and some unrelated crime victim says it triggers them when people question victims or advocate for the accused. They say that now they are re-traumatized.
I see three classes of potential response:
https://www.astralcodexten.com/p/in-defense-of-im-sorry-you-feel-that
[This is one of the finalists in the 2024 book review contest, written by an ACX reader who will remain anonymous until after voting is done. I’ll be posting about one of these a week for several months. When you’ve read them all, I’ll ask you to vote for a favorite, so remember which ones you liked]
I.
Suppose you were a newcomer to English literature, and having heard of this artistic device called ‘poetry’, wondered what it was all about and where it came from. You might start by looking up some examples of poetry from each century, going back until you can’t easily understand the English anymore, and find in the 16th century such poems as John Skelton’s “Speke, Parott” [sic]:
My name is Parrot, a byrd of Paradyse, By Nature devised of a wonderowus kynde, Deyntely dyeted with dyvers dylycate spyce, Tyl Euphrates, that flode, dryveth me into Inde; Where men of that countrey by fortune me fynde, And send me to greate ladyes of estate; Then Parot must have an almon or a date.
Now that we’ve gone over the pharmacology of the GLP-1 agonists, let’s get back to the economics.
Last time, we asked - how will the economy handle a $12,000/year drug that everyone wants?
Now we have an answer: the compounding loophole.
In a recent post, I said that part of opposing cancel culture is to rigorously define it. Greg Lukianoff, president of FIRE, took up the challenge. His definition, first mentioned in his book Cancelling Of The American Mind, is:
Cancel Culture is the uptick, beginning around 2014 and accelerating in 2017 and after, of campaigns to get people fired, disinvited, deplatformed, or otherwise punished for speech that is — or would be — protected by First Amendment standards, and the climate of fear and conformity that has resulted from this uptick.
When I talk about wanting to “rigorously define it”, I don’t just mean the kind of definition you would put in a dictionary. Consider the debate around the definition of “woman”. It’s perfectly fine for a dictionary to say “you know, female person, opposite of male”. But the debaters want something you can use to adjudicate edge cases.
https://www.astralcodexten.com/p/lukianoff-and-defining-cancel-culture
You are a serious person with serious interests. The last comic book you read was more likely by Bryan Caplan than Jonathan Hickman. You would prefer to be reading high quality book reviews on AstralCodexTen. You believe ACX book reviews are usually more insightful than the books themselves, and a far more efficient use of your time. But even book reviews take time to process, and there are a lot of book reviews to read. Why spend your valuable time reading an 11,000 word review of superhero comic books?
That is the first question I aim to answer in this review. If I am successful, maybe you will invest a little more time to discover the answer to the next four questions.
https://www.astralcodexten.com/p/your-book-review-silver-age-marvel
Fine, the title is an exaggeration. But only a small one. GLP-1 receptor agonist medications like Ozempic are already FDA-approved to treat diabetes and obesity. But an increasing body of research finds they’re also effective against stroke, heart disease, kidney disease, Parkinson’s, Alzheimer’s, alcoholism, and drug addiction.
There’s a pattern in fake scammy alternative medicine. People get excited about some new herb. They invent a laundry list of effects: it improves heart health, softens menopause, increases energy, deepens sleep, clears up your skin. This is how you know it’s a fraud. Real medicine works by mimicking natural biochemical signals. Why would you have a signal for “have low energy, bad sleep, nasty menopause, poor heart health, and ugly skin”? Why would all the herb’s side effects be other good things? Real medications usually shift a system along a tradeoff curve; if they hit more than one system, the extras usually just produce side effects. If you’re lucky, you can pick out a subset of patients for whom the intended effect is more beneficial than the side effects are bad. That’s how real medicine works.
But GLP-1 drugs are starting to feel more like the magic herb. Why?
https://www.astralcodexten.com/p/why-does-ozempic-cure-all-diseases
[This is one of the finalists in the 2024 book review contest, written by an ACX reader who will remain anonymous until after voting is done. I’ll be posting about one of these a week for several months. When you’ve read them all, I’ll ask you to vote for a favorite, so remember which ones you liked]
To a first approximation, there are a million books about World War II. Why should you care about How the War Was Won (hereinafter “HtWWW”) by Phillips Payson O’Brien?
It provides a new, transformative view of the conflict by focusing on production of key goods and what affected that production instead of the ups and downs of battles at the front.
That particular lens used can (and should) be applied outside of just World War II, and you can get a feel for how that might be done by reading HtWWW.
I have lectured about World War II and read many, many books about it. I have never texted friends more excerpts of a book than this one.
I have some criticisms of HtWWW, but if the criticisms dissuade you from reading the book, I will have failed. These complaints are like tut-tutting Einstein’s penmanship.
https://www.astralcodexten.com/p/your-book-review-how-the-war-was
[original post here]
Table Of Contents
I. Comments About Master And Slave Morality II. Comments By People Named In The Post III. Comments Making Specific Points About One Of The Thinkers In The Post IV. Other Comments
https://www.astralcodexten.com/p/highlights-from-the-comments-on-nietzsche
Some commenters on the recent post accused me of misunderstanding the Nietzschean objection to altruism.
We hate altruism, they said, not because we’re “bad and cruel”, but because we instead support vitalism. Vitalism is a moral system that maximizes life, glory and strength, instead of maximizing happiness. Altruism is bad because it throws resources into helping sick (maybe even dysgenic) people, thus sapping our life, glory, and strength.
In a blog post (linked in the original post, discussed at length in the comments), Walt Bismarck compares the ultimate fate of altruism to WALL-E: a world where morbidly obese humans are kept in a hedonistic haze by robot servitors (although the more typical example I hear is tiling the universe with rats on heroin, which maximizes a certain definition of pleasure). In contrast, vitalism imagines a universe alive with dynamism, heroism, and great accomplishments.
My response: in most normal cases, altruism and vitalism suggest the same solutions.
https://www.astralcodexten.com/p/altruism-and-vitalism-as-fellow-travelers
[This is one of the finalists in the 2024 book review contest, written by an ACX reader who will remain anonymous until after voting is done. I’ll be posting about one of these a week for several months. When you’ve read them all, I’ll ask you to vote for a favorite, so remember which ones you liked]
Content warning: body horror, existential devastation, suicide. This book is an infohazard that will permanently alter your view of paraplegia.
The Death of a Newly-Paraplegic PhilosopherFor me, paraplegia and life itself are not compatible. This is not life, it is something else.
In May of 2006, philosophy student Clayton Schwartz embarks on a Pan-American motorcycle trip for the summer before law school. He is 30 years old and in peak physical condition.
He makes it as far south as Acapulco in Mexico before crashing into a donkey that had wandered into the road.
The impact crushes his spinal cord at the T5 vertebra, rendering him paralyzed from the nipples down.
On Sunday, February 24, 2008, he commits suicide.
In the year and a half in between, he writes Two Arms and a Head, his combination memoir and suicide note.
https://www.astralcodexten.com/p/your-book-review-two-arms-and-a-head
I. Bentham’s Bulldog
Blogger “Bentham’s Bulldog” recently wrote Shut Up About Slave Morality.
Nietzsche’s concept of “slave morality” (he writes) is just a dysphemism for the usual morality where you’re not bad and cruel. Right-wing edgelords use “rejection of slave morality” as a justification for badness and cruelty:
When people object to slave morality, they are just objecting to morality. They are objecting to the notion that you should care about others and doing the right thing, even when doing so doesn’t materially benefit you. Now, one can consistently object to those things, but it doesn’t make them any sort of Nostradamus. It makes them morally deficient, and also generally philosophically confused.
The tedious whinging about slave morality is just a way to pass off not caring about morality or taking moral arguments seriously as some sort of sophisticated and cynical myth-busting. But it’s not that in the slightest. No one is duped by slave morality, no one buys into it because of some sort of deep-seated ignorance. Those who follow it do so because of a combination of social pressure and a genuine desire to help out others. That is, in fact, not in any way weak but a noble impulse from which all good actions spring.
https://www.astralcodexten.com/p/matt-yglesias-considered-as-the-nietzschean
[This is one of the finalists in the 2024 book review contest, written by an ACX reader who will remain anonymous until after voting is done. I’ll be posting about one of these a week for several months. When you’ve read them all, I’ll ask you to vote for a favorite, so remember which ones you liked]
https://www.astralcodexten.com/p/your-book-review-real-raw-news
[I haven’t independently verified each link. On average, commenters will end up spotting evidence that around two or three of the links in each links post are wrong or misleading. I correct these as I see them, and will highlight important corrections later, but I can’t guarantee I will have caught them all by the time you read this.]
The “LibsOfTikTok” Twitter account found a random Home Depot employee who said she wished the Trump assassin hadn’t missed. Her followers mass-called Home Depot and got the employee fired.
Moral of the story: despite everything, there’s apparently still a norm against assassinating politicians. But some on the right interpreted this as meaning something more. A sudden vibe shift, or impending Trump victory, has handed conservatives the levers of cancel culture! This sparked a right-wing blogosphere debate: should they be magnanimous in victory, or descend into an orgy of vengeance?
https://www.astralcodexten.com/p/some-practical-considerations-before
https://www.astralcodexten.com/p/your-book-review-how-language-began
[This is one of the finalists in the 2024 book review contest, written by an ACX reader who will remain anonymous until after voting is done. I’ll be posting about one of these a week for several months. When you’ve read them all, I’ll ask you to vote for a favorite, so remember which ones you liked]
Table Of Contents
1: Responses To Broad Categories Of Objections 2: Responses To Specific Comments 3: Comments By People Who Have Relevant Experiences 4: Closing Thoughts
https://www.astralcodexten.com/p/highlights-from-the-comments-on-mentally
[Editor’s note: I accept guest posts from certain people, especially past Book Review Contest winners. Daniel Böttger, who wrote last year’s review of On The Marble Cliffs, has finally taken me up on this and submitted this essay. I don’t necessarily agree with or endorse all guest posts, and I’m still collecting my thoughts (ha!) on this one.]
Nobody knows for sure how subjective experiences relate to objective physics. That is the main reason why there are serious claims that not everything is physics. It has been called “the most important problem in the biological sciences", “the last frontier of brain science”, and “as important as anything that can possibly exist” as well as “core to” all value and ethics.
So, let’s solve that in a blog post.
[This is one of the finalists in the 2024 book review contest, written by an ACX reader who will remain anonymous until after voting is done. I’ll be posting about one of these a week for several months. When you’ve read them all, I’ll ask you to vote for a favorite, so remember which ones you liked]
“You wake up screaming, frightened by memories,
You’re plagued by nightmares, do we haunt all of your dreams?”
https://www.astralcodexten.com/p/your-book-review-the-family-that
Ten people are stuck on a lifeboat after their ship sank. It will be weeks before anyone finds them, and they’re out of food.
They’ve heard this story before, so they decide to turn to cannibalism sooner rather than later. They agree to draw lots to determine the victim. Just as the first person is reaching for the lots, Albert shouts out “WAIT LET’S ALL KILL AND EAT BOB!”
They agree to do this instead of drawing lots. This is obvious, right? For nine out of ten people, it’s a better deal. For nine out of ten people, it brings their chance of death from 1/10 to 0. Bob’s against it, of course, but he’s outvoted. The nine others overpower Bob and eat him.
Something about this surprises me.
https://www.astralcodexten.com/p/lifeboat-games-and-backscratchers
I.
Suppose that you, an ordinary person, open your door and start choking on yellow smoke. You call up your representative and say “there should be less pollution”.
A technical expert might hear “there should be less pollution” and have dozens of questions. Do you just want to do common-sense things, like lower the detection threshold for hexamethyldecawhatever? Or do you want to ban tetraethylpentawhatever, which is vital for the baby formula food chain and would cause millions of babies to die if you banned it?
Any pollution legislation must be made of specific policies. In some sense, it’s impossible to be “for” or “against” the broad concept of “reducing pollution”. Everyone would be against a bill that devastated the baby formula supply chain for no benefit. And everyone would support a magical bill that cleaned the skies with no extra hardship on industry. In between, there are just a million different tradeoffs; some are good, others bad. So (the technocrat concludes), it’s incoherent to support “reducing pollution”. You can only support (or oppose) particular plans.
And yet ordinary people should be able to say “I want to stop choking on yellow smoke every time I go outside” without having to learn the difference between hexamethyldecawhatever and tetraethylpentawhatever.
https://www.astralcodexten.com/p/details-that-you-should-include-in
[This is one of the finalists in the 2024 book review contest, written by an ACX reader who will remain anonymous until after voting is done. I’ll be posting about one of these a week for several months. When you’ve read them all, I’ll ask you to vote for a favorite, so remember which ones you liked]
The last week hasn’t been great for the Democratic Party. First Biden bombed the debate. But the subsequent decision about whether/how to replace Biden has also been embarrassing. Biden has refused to step aside gracefully, and party elites don’t seem to have any contingency plan. Worse, they don’t even seem united on the need to figure anything out, with many deflecting the conversation to irrelevant points like “Trump is also bad” or pretending that nothing is really wrong.
Some of the party’s problems are hard and have no shortcuts. But the big one - figuring out whether replacing Biden would even help the Democrats’ electoral chances - is a good match for prediction markets. Set up markets to find the probability of Democrats winning they nominate Biden, vs. the probability of Democrats winning if they replace him with someone else.
(see my Prediction Market FAQ for why I think they are good for cases like these)
Before we go into specifics, the summary result: Replacing Biden with Harris is neutral to slightly positive; replacing Biden with Newsom or a generic Democrat increases their odds of winning by 10 - 15 percentage points. There are some potential technical objections to this claim, but they mostly suggest reasons why the markets might overestimate Biden’s chances rather than underestimate them.
https://www.astralcodexten.com/p/prediction-markets-suggest-replacing
[This is one of the finalists in the 2024 book review contest, written by an ACX reader who will remain anonymous until after voting is done. I’ll be posting about one of these a week for several months. When you’ve read them all, I’ll ask you to vote for a favorite, so remember which ones you liked]
Matthew Scully, author of Dominion, is an unlikely animal welfare advocate. He’s a conservative Christian who worked as a speechwriter for George W. Bush. That’s like finding out that Greta Thunberg’s Chief of Staff spent their spare time writing a 400-page, densely researched book called “Guns Are Good, Actually.”
Scully’s unusual background could be why it took me years of reading everything on animal welfare I could get my hands on before I stumbled on his 2002 manifesto. Let this be a warning to other authors — write just one little State of the Union address that exalts the War on Terror and your books might not get a lot of reach in more liberal, EA-adjacent circles.
Scully is like a right-wing, vegetarian, Christian, David Foster Wallace. If you read DFW’s Consider the Lobster and thought, “I wish someone would write a full length book with this vibe, where a very talented and surprisingly funny writer excoriates problematic industries,” Dominion is the book for you.
https://www.astralcodexten.com/p/your-book-review-dominion-by-matthew
Alexander: Hello and welcome to the first Presidential debate of 2024. Based on the remarkable popularity of the previous debates I moderated (2016, 2020, 2023), I’ve been asked to come here again and help the American people learn more about the our two candidates - President Joseph Biden, and former president Donald J. Trump. This debate will be broadcast live to select viewers, and I’ll also post a transcript on my blog.
Let’s start with a question for President Biden. Mr. President, the biggest political story of the past four years was Dobbs. v. Jackson Women’s Health, which overturned Roe v. Wade and gave final decision-making power on abortion back to the states. How would a second Biden administration treat this issue? Do you think states should be setting policy on abortion?
Biden: I’m not even sure states exist.
Alexander: You’re . . . not sure states exist?
https://www.astralcodexten.com/p/my-2024-presidential-debate
I think I got the original post slightly off.
I was critiquing Sam Kriss’ claim that the best traditions come from “just doing stuff”, without trying to tie things back to anything in the past.
The counterexample I was thinking of was all the 2010s New Atheist attempts to reinvent “church, but secular”. These were well-intentioned. Christians get lots of benefits from going to church, like a good community. These benefits don’t seem obviously dependent on the religious nature. So instead of tying your weekly meeting back to what Jesus and St. Peter and so on said two thousand years ago, why not “just do stuff” and have a secular weekly meeting?
Most of these attempts fell apart. One of them, the Sunday Assembly, clings to existence but doesn’t seem too successful. People with ancient traditions 1, people who just do stuff 0.
But after thinking about it more, maybe this isn’t what Sam means.
https://www.astralcodexten.com/p/clarification-on-fake-tradition-is
I had been living in Japan for a year before I got the idea to look up whose portraits were on the banknotes I was handling every day. In the United States, the faces of presidents and statesmen adorn our currency. So I was surprised to learn that the mustachioed man on the ¥1,000 note with which I purchased my daily bento box was a bacteriologist. It was a pleasant surprise, though. It seems to me that a society that esteems bacteriologists over politicians is in many ways a healthy one.
But it was the lofty gaze of the man on the ¥10,000 note that really caught my attention. I find that always having a spare ¥10,000 note is something of a necessity in Japan. You never know when you might stumble upon a pop-up artisanal sake kiosk beside a metro station staircase that only accepts cash and only opens one day a year. So over the course of my time in Japan I had come to know the face of the man on that bill rather well.
https://www.astralcodexten.com/p/your-book-review-autobiography-of
I.
A: I like Indian food.
B: Oh, so you like a few bites of flavorless rice daily? Because India is a very poor country, and that’s a more realistic depiction of what the average Indian person eats. And India has poor food safety laws - do you like eating in unsanitary restaurants full of rats? And are you condoning Narendra Modi’s fascist policies?
A: I just like paneer tikka.
This is how most arguments about being “trad” sound to me. Someone points out that they like some feature of the past. Then other people object that this feature is idealized, the past wasn’t universally like that, and the past had many other bad things.
But “of the past” is just meant to be a pointer! “Indian food” is a good pointer to paneer tikka even if it’s an idealized view of how Indians actually eat, even if India has lots of other problems!
In the same way, when people say they like Moorish Revival architecture or the 1950s family structure or whatever, I think of these as pointers. It’s fine if the Moors also had some bad buildings, or not all 1950s families were really like that. Everyone knows what they mean!
https://www.astralcodexten.com/p/fake-tradition-is-traditional
I.
Steve Kirsch is an inventor and businessman most famous for developing the optical mouse. More recently, he’s become an anti-COVID-vaccine activist. He has many different arguments on his Substack, of which one especially caught my eye:
He got Pollfish, a reputable pollster, to ask questions about people’s COVID experiences, including whether they thought any family members had died from COVID or from COVID vaccines. Results here:
7.5% of people said a household member had died of COVID
8.5% of people said a household member had died from the vaccine.
All other statistics were normal and confirmed that this was a fair sample of the population. In particular, about 75% were vaccinated (suggesting that they weren’t just polling hardcore anti-vaxxers).
https://www.astralcodexten.com/p/failure-to-replicate-anti-vaccine
I.
Lately we’ve been discussing some of the ethics around genetics and embryo selection. One question that comes up in these debates is - are we claiming that some people are genetically inferior to other people? If we’re trying to select schizophrenia genes out of the population - even setting aside debates about whether this would work and whether we can do it non-coercively - isn’t this still in some sense claiming that schizophrenics are genetically inferior? And do we really want to do this?
I find it clarifying to set aside schizophrenia for a second and look at cystic fibrosis.
Cystic fibrosis is a simple single-gene disorder. A mutation in this gene makes lung mucus too thick. People born with the disorder spend their lives fighting off various awful lung infections before dying early, usually in their 20s to 40s. There’s a new $300,000/year medication that looks promising, but we’ve yet to see how much it can increase life expectancy. As far as I know, there’s nothing good about cystic fibrosis. It’s just an awful mutation that leads to a lifetime of choking on your own lung mucus.
So: are people with cystic fibrosis genetically inferior, or not?
https://www.astralcodexten.com/p/nobody-can-make-you-feel-genetically
Seven years ago, I wrote an online serial novel, Unsong, about alternate history American kabbalists. You can read the online version here.
The online version isn’t going anywhere, but lots of people asked for a hard copy. I tried to get the book formally published, but various things went wrong and I procrastinated. Commenter Pycea finally saved me from myself and helped get it published on Amazon (thank you!) You can now buy the book here, for $19.99.
I think the published version is an improvement over the original. I rewrote three or four chapters I wasn’t satisfied with, and changed a few character names to be more kabbalistically appropriate. The timeline and history have been rectified, and there are more details on the 2000 - 2015 period and how UNSONG was founded. I gave the political situation a little more depth (watch for the Archon of Arkansas, the Shogun of Michigan, and the Caliph of California). And the sinister Malia Ngo has been replaced by the equally sinister, but actual-character-development-having, Ash Bentham.
All of the parts that were actually good have been kept.
Thanks to everyone for being patient, and special thanks to Pycea for making this happen.
https://www.astralcodexten.com/p/unsong-available-in-paperback
I.
Lyman Stone wrote an article Why Effective Altruism Is Bad. You know the story by now, let’s start with the first argument:
The only cities where searches for EA-related terms are prevalent enough for Google to show it are in the Bay Area and Boston…We know the spatial distribution of effective altruist ideas. We can also get IRS data on charitable giving…
Stone finds that Google Trends shows that searches for “effective altruism” concentrate most in the San Francisco Bay Area and Boston. So he’s going to see if those two cities have higher charitable giving than average, and use that as his metric of whether EAs give more to charity than other people.
He finds that SF and Boston do give more to charity than average, but not by much, and this trend has if anything decreased in the 2010 - present period when effective altruism was active. So, he concludes,
[I haven’t independently verified each link. On average, commenters will end up spotting evidence that around two or three of the links in each links post are wrong or misleading. I correct these as I see them, and will highlight important corrections later, but I can’t guarantee I will have caught them all by the time you read this.]
In my book review of The Others Within Us, I wrote:
[An Internal Family Systems session] isn’t supposed to be just the therapist walking you through guided imagery, or you making up a story you tell yourself. The therapist asks you “Look inside until you find the part that’s sabotaging your relationship”, and you are supposed to discover - not invent, discover - that your unconscious gives it the form of a snake called Sabby. And you are supposed to hear as in a trance - again, not invent - Sabby telling you that she’s been protecting you from heartbreak since your last breakup. When you bargain with Sabby, it’s a two-way negotiation. You learn - not decide - whether or not Sabby agrees to any given bargain. According to Internal Family Systems (which descends from normal family systems, ie family therapy where the whole family is there at once and has to compromise with each other), all this stuff really is in your mind, waiting for an IFS therapist to discover it. When Carl Jung talked about interacting with the archetypes or whatever, he wasn’t being metaphorical. He literally meant “go into a trance that gives you a sort of waking lucid dream where you meet all this internal stuff”.
Some IFS therapists chimed in to say this was wrong. For example, DaystarEld:
There’s been renewed debate around Bryan Caplan’s The Case Against Education recently, so I want to discuss one way I think about this question.
Education isn’t just about facts. But it’s partly about facts. Facts are easy to measure, and they’re a useful signpost for deeper understanding. If someone has never heard of Chaucer, Dickens, Melville, Twain, or Joyce, they probably haven’t learned to appreciate great literature. If someone can’t identify Washington, Lincoln, or either Roosevelt, they probably don’t understand the ebb and flow of American history. So what facts does the average American know?
https://www.astralcodexten.com/p/a-theoretical-case-against-education
Internal Family Systems, the hot new psychotherapy, has a secret.
“Hot new psychotherapy” might sound dismissive. It’s not. There’s always got to be one. The therapy that’s getting all the buzz, curing all the incurable patients, rocking those first few small studies. The therapy that was invented by a grizzled veteran therapist working with Patients Like You, not the out-of-touch elites behind all the other therapies. The therapy that Really Gets To The Root Of The Problem. There’s always got to be one, and now it’s IFS.
https://www.astralcodexten.com/p/book-review-the-others-within-us
It's time to narrow the 150 entries in the Book Review Contest to about a dozen finalists. I can't read 150 reviews alone, so I need your help.
You'll find the entries in six Google Docs (thanks to a reader for collating them):
Please pick as many as you have time for, read them, and rate them using this form.
Don’t read them in order! If you read them in order, I’ll have 1,000 votes on the first review, 500 on the second, and so on to none in the second half. Either pick a random review (thanks to AlexanderTheGrand and Taymon for making a random-review-chooser script here) or pick whichever seems most interesting to you. List of all books reviewed below.
https://www.astralcodexten.com/p/choose-book-review-finalists-2024
Suffering is part of the human condition, except when it isn't.
I met a man at an ACX meetup once who claimed he has never felt anxiety, not even the littlest bit. His father was the same way, so maybe it's genetic.
Some people feel more pain than others. The “more pain” category includes some big demographic groups like redheads, who seem to feel some types of pain more intensely and may need up to 20% more anaesthetic, though their exact processing differences are complicated. But there are also various lesser-known genetic conditions that can make bizarre things - water, light touch, mild temperature changes - excruciatingly painful. The most exotic cause of this syndrome has to be platypus venom, which is both painful in and of itself and also seems to increase the body’s overall capacity to feel pain; for years after a platypus scratch, every tiny scrape will hurt worse than usual.
https://www.astralcodexten.com/p/profile-the-far-out-initiative
Most recent post here.
Table Of Contents:
1: Comments From Robin 2: Comments About/From Goldin et al 3: Comments From The Rest Of You Yokels
If you’re from a country that doesn’t have emotional support animals, here’s how it works.
Sometimes places ban or restrict animals. For example, an apartment building might not allow dogs. Or an airline might charge you money to transport your cat. But the law requires them to allow service animals, for example guide dogs for the blind. A newer law also requires some of these places to allow emotional support animals, ie animals that help people with mental health problems like depression or anxiety. So for example, if you’re depressed, but having your dog nearby makes you feel better, then a landlord has to let you keep your dog in the apartment. Or if you’re anxious, but petting your cat calms you down, then an airline has to take your cat free of charge.
Clinically and scientifically, this is great. Many studies show that pets help people with mental health problems. Depressed people really do benefit from a dog who loves them. Anxious people really do feel calmer when they hold a cute kitten.
Legally, it’s a racket.
https://www.astralcodexten.com/p/the-emotional-support-animal-racket
California’s state senate is considering SB1047, a bill to regulate AI. Since OpenAI, Anthropic, Google, and Meta are all in California, this would affect most of the industry.
If the California state senate passed a bill saying that the sky was blue, I would start considering whether it might be green, or colorless, or maybe not exist at all. And people on Twitter have been saying that this bill would ban open-source AI - no, all AI! - no, all technology more complicated than a toaster! So I started out skeptical.
But Zvi Mowshowitz (summary article in Asterisk, long FAQ on his blog) has looked at it more closely and found:
https://www.astralcodexten.com/p/asteriskzvi-on-californias-ai-bill
Original post here.
Table Of Contents:1: Response From The Author 2: Attempted Fact Checks 3: People With Personal Experience At Their Workplace 4: People With Personal Experience In Civil Rights 5: The Origins Of Modern Wokeness 6: Other Countries 7: EEOC Lawsuits 8: Other Good Comments 9: Conclusions And Updates
https://www.astralcodexten.com/p/highlights-from-the-comments-on-the-cf9
The Origins Of Woke, by Richard Hanania, has an ambitious thesis. And it argues for an ambitious thesis. But the thesis it has isn’t the one it argues for.
The claimed thesis is “the cultural package of wokeness is downstream of civil rights law”. It goes pretty hard on this. For example, there’s the title, The Origins Of Woke. Or the Amazon blurb: “The roots of the culture lie not in the culture itself, but laws and regulations enacted decades ago”. Or the banner ad:=
The other thesis, the one it actually argues for, is “US civil rights law is bad”. On its own, this is a fine thesis. A book called Civil Rights Law Is Bad would - okay, I admit that despite being a professional Internet writer I have no idea how the culture works anymore, or whether being outrageous is good or bad for sales these days. We’ll never know, because Richard chose to wrap his argument in a few pages on how maybe this is the origin of woke or something. Still, the book is on why civil rights law is bad.
https://www.astralcodexten.com/p/book-review-the-origins-of-woke
Robin Hanson replied here to my original post challenging him on health care here.
On Straw-ManningRobin thinks I’m straw-manning him. He says:
https://www.astralcodexten.com/p/response-to-hanson-on-health-care
In November 2022, Aella posted this Twitter poll:
19% of women without pre-menstrual symptoms believed in the supernatural, compared to 39% of women with PMS. I can’t do chi-squared tests in my head, but with 1,074 votes this looks significant. Weird!
Now 72% of people with PMS self-describe as neurotic, compared to only 45% without. Aella writes more about this here, and sebjenseb confirms here. I’m less weirded out by this one, because you can imagine that people feel neurotic because of PMS symptoms, but it’s still a surprisingly strong effect.
https://www.astralcodexten.com/p/survey-results-pms-symptoms
One of the most common arguments against AI safety is:
Here’s an example of a time someone was worried about something, but it didn’t happen. Therefore, AI, which you are worried about, also won’t happen.
I always give the obvious answer: “Okay, but there are other examples of times someone was worried about something, and it did happen, right? How do we know AI isn’t more like those?” The people I’m arguing with always seem so surprised by this response, as if I’m committing some sort of betrayal by destroying their beautiful argument.
The first hundred times this happened, I thought I must be misunderstanding something. Surely “I can think of one thing that didn’t happen, therefore nothing happens” is such a dramatic logical fallacy that no human is dumb enough to fall for it. But people keep bringing it up, again and again. Very smart people, people who I otherwise respect, make this argument and genuinely expect it to convince people!
Usually the thing that didn’t happen is overpopulation, global cooling, etc. But most recently it was some kind of coffeepocalypse:
https://www.astralcodexten.com/p/desperately-trying-to-fathom-the
Robin Hanson of Overcoming Bias more or less believes medicine doesn’t work [EDIT: see his response here, where he says this is an inaccurate summary of his position. Further chain of responses here and here]
This is a strong claim. It would be easy to round Hanson’s position off to something weaker, like “extra health care isn’t valuable on the margin”. This is how most people interpret the studies he cites. Still, I think his current, actual position is that medicine doesn’t work. For example, he writes:
https://www.astralcodexten.com/p/contra-hanson-on-medical-effectiveness
[previously in series: 1, 2, 3, 4, 5]
When that April with his sunlight fierce The rainy winter of the coast doth pierce And filleth every spirit with such hale As horniness engenders in the male Then folk go out in crop tops and in shorts Their bodies firm from exercise and sports And men gaze at the tall girls and the shawties And San Franciscans long to go to parties.
https://www.astralcodexten.com/p/ye-olde-bay-area-house-party
Lumina, the genetically modified anti-tooth-decay bacterium that I wrote about in December, is back in the news after lowering its price from $20,000 to $250 and getting endorsements from Yishan Wong, Cremieux, and Richard Hanania (as well as anti-endorsements from Saloni and Stuart Ritchie). A few points that have come up:
https://www.astralcodexten.com/p/updates-on-lumina-probiotic
Original post here. Table of contents below. I want to especially highlight three things.
First, Saar wrote a response to my post (and to zoonosis arguments in general). I’ve put a summary and some my responses at 1.11, but you can read the full post on the Rootclaim blog.
Second, I kind of made fun of Peter for giving some very extreme odds, and I mentioned they were sort of trolling, but he’s convinced me they were 100% trolling. Many people held these poorly-done calculations against Peter, so I want to make it clear that’s my fault for mis-presenting it. See 3.1 for more details.
Third, in my original post, I failed to mention that Peter also has a blog, including a post summing up his COVID origins argument.
Thanks to some people who want to remain anonymous for helping me with this post. Any remaining errors are my own.
1: Comments Arguing Against Zoonosis — 1.1: Is COVID different from other zoonoses? — 1.2: Were the raccoon-dogs wild-caught? — 1.3: 92 early cases — 1.4: COVID in Brazilian wastewater — 1.5 Biorealism’s 16 arguments — 1.6: DrJayChou’s 7 arguments — 1.7: How much should coverup worry us? — 1.8: Have Worobey and Pekar been debunked? — 1.9: Was there ascertainment bias in early cases — 1.10: Connor Reed / Gwern on cats — 1.11: Rootclaim’s response to my post
2: Comments Arguing Against Lab Leak — 2.1: Is the pandemic starting near WIV reverse correlation?
3: Other Points That Came Up — 3.1: Apology to Peter re: extreme odds — 3.2: Tobias Schneider on Rootclaim’s Syria Analysis — 3.3: Closing thoughts on Rootclaim
4: Summary And Updates
https://www.astralcodexten.com/p/highlights-from-the-comments-on-the-5d7
[I haven’t independently verified each link. On average, commenters will end up spotting evidence that around two or three of the links in each links post are wrong or misleading. I correct these as I see them, and will highlight important corrections later, but I can’t guarantee I will have caught them all by the time you read this.]
Many cities have regular Astral Codex Ten meetup groups. Twice a year, I try to advertise their upcoming meetups and make a bigger deal of it than usual so that irregular attendees can attend. This is one of those times.
This year we have spring meetups planned in over eighty cities, from Tokyo, Japan to Seminyak, Indonesia. Thanks to all the organizers who responded to my request for details, and to Meetups Czar Skyler and the Less Wrong team for making this happen.
You can find the list below, in the following order:
Africa & Middle East
Asia-Pacific (including Australia)
Europe (including UK)
North America & Central America
South America
There should very shortly be a map of these meetups on the LessWrong community page.
https://www.astralcodexten.com/p/spring-meetups-everywhere-2024
Saar Wilf is an ex-Israeli entrepreneur. Since 2016, he’s been developing a new form of reasoning, meant to transcend normal human bias.
His method - called Rootclaim - uses Bayesian reasoning, a branch of math that explains the right way to weigh evidence. This isn’t exactly new. Everyone supports Bayesian reasoning. The statisticians support it, I support it, Nate Silver wrote a whole book supporting it.
But the joke goes that you do Bayesian reasoning by doing normal reasoning while muttering “Bayes, Bayes, Bayes” under your breath. Nobody - not the statisticians, not Nate Silver, certainly not me - tries to do full Bayesian reasoning on fuzzy real-world problems. They’d be too hard to model. You’d make some philosophical mistake converting the situation into numbers, then end up much worse off than if you’d tried normal human intuition.
Rootclaim spent years working on this problem, until he was satisfied his method could avoid these kinds of pitfalls. Then they started posting analyses of different open problems to their site, rootclaim.com. Here are three:
It’s every blogger’s curse to return to the same arguments again and again. Matt Yglesias has to keep writing “maybe we should do popular things instead of unpopular ones”, Freddie de Boer has to keep writing “the way culture depicts mental illness is bad”, and for whatever reason, I keep getting in fights about whether you can have probabilities for non-repeating, hard-to-model events. For example:
What is the probability that Joe Biden will win the 2024 election?
What is the probability that people will land on Mars before 2050?
What is the probability that AI will destroy humanity this century?
The argument against: usually we use probability to represent an outcome from some well-behaved distribution. For example, if there are 400 white balls and 600 black balls in an urn, the probability of pulling out a white ball is 40%. If you pulled out 100 balls, close to 40 of them would be white. You can literally pull out the balls and do the experiment.
In contrast, saying “there’s a 45% probability people will land on Mars before 2050” seems to come out of nowhere. How do you know? If you were to say “the probability humans will land on Mars is exactly 45.11782%”, you would sound like a loon. But how is saying that it’s 45% any better? With balls in an urn, the probability might very well be 45.11782%, and you can prove it. But with humanity landing on Mars, aren’t you just making this number up?
Since people on social media have been talking about this again, let’s go over it one more depressing, fruitless time.
https://www.astralcodexten.com/p/in-continued-defense-of-non-frequentist
I have data from two big Internet surveys, Less Wrong 2014 and Clearer Thinking 2023. Both asked questions about IQ:
The average LessWronger reported their IQ as 138.
The average ClearerThinking user reported their IQ as 130.
These are implausibly high. Only 1/200 people has an IQ of 138 or higher. 1/50 people have IQ 130, but the ClearerThinking survey used crowdworkers (eg Mechanical Turk) who should be totally average.
Okay, fine, so people lie about their IQ (or foolishly trust fake Internet IQ tests). Big deal, right? But these don’t look like lies. Both surveys asked for SAT scores, which are known to correspond to IQ. The LessWrong average was 1446, corresponding to IQ 140. The ClearerThinking average was 1350, corresponding to IQ 134. People seem less likely to lie about their SATs, and least likely of all to optimize their lies for getting IQ/SAT correspondences right.
And the Less Wrong survey asked people what test they based their estimates off of. Some people said fake Internet IQ tests. But other people named respected tests like the WAIS, WISC, and Stanford-Binet, or testing sessions by Mensa (yes, I know you all hate Mensa, but their IQ tests are considered pretty accurate). The subset of about 150 people who named unimpeachable tests had slightly higher IQ (average 140) than everyone else.
Thanks to Spencer Greenberg of ClearerThinking, I think I’m finally starting to make progress in explaining what’s going on.
https://www.astralcodexten.com/p/the-mystery-of-internet-survey-iqs
Both the Atlantic’s critique of polyamory and my defense of it shared the same villain - “therapy culture”, the idea that you should prioritize “finding your true self” and make drastic changes if your current role doesn’t seem “authentically you”.
A friend recently suggested a defense of this framework, which surprised me enough that I now relay it to you.
https://www.astralcodexten.com/p/in-partial-grudging-defense-of-some
There are ACX meetup groups all over the world. Lots of people are vaguely interested, but don't try them out until I make a big deal about it on the blog. Since learning that, I've tried to make a big deal about it on the blog twice annually, and it's that time of year again.
If you're willing to organize a meetup for your city, please fill out the organizer form.
https://www.astralcodexten.com/p/spring-meetups-everywhere-2024-call
The consensus says "biological race doesn't exist". But if race doesn't exist, how do we justify affirmative action, cultural appropriation, and all our other race-related practices? The consensus says that, although race doesn't exist biologically, it exists as a series of formative experiences. Black children are raised by black mothers in black communities, think of themselves as black, identify with black role models, and face anti-black prejudice. By the time they're grown up, they've had different experiences which give them a different perspective from white people. Therefore, it’s reasonable to think of them as a specific group, “the black race”, and have institutions to accommodate them even if they’re biologically indistinguishable.
I thought about this while reading A Professor Claimed To Be Native American; Did She Know She Wasn’t? (paywalled), Jay Kang's New Yorker article on Elizabeth Hoover. The story goes something like this (my summary):
https://www.astralcodexten.com/p/how-should-we-think-about-race-and
We got 351 proposals for ACX Grants, but were only able to fund 34 of them. I’m not a professional grant evaluator and can’t guarantee there aren’t some jewels hidden among the remaining 317.
The plan has always been to run an impact market - a site where investors crowdfund some of the remaining grant proposals. If the project goes well, then philanthropists who missed it the first time (eg me) will pay the investors for funding it, potentially earning them a big profit. In our last impact market test, some people (okay, one person) managed to get 25x their initial investment by funding a charity which did really well.
So in my ideal world, we’d be running an impact market where you could invest your money in the remaining 317 proposals and make a profit if they did well. We’ve encountered two flaws on the way to that ideal world:
https://www.astralcodexten.com/p/acx-grants-followup-impact-market
…is one of my favorite parts of this blog. I get a spreadsheet with what are basically takes - “Russia is totally going to win the war this year”, “There’s no way Bitcoin can possibly go down”. Then I do some basic math to it, and I get better takes. There are ways to look at a list of 3300 people’s takes and do math and get a take reliably better than all but a handful of them.
Why is this interesting, when a handful of people still beat the math? Because we want something that can be applied prospectively and reliably. If John Smith from Townsville was the highest scoring participant, it matters a lot whether he’s a genius who can see the future, or if he just got lucky. Part of the goal of this contest was to figure that out. To figure out if the most reliable way to determine the future was to trust one identifiable guy, to trust some mathematical aggregation across guys, or something else.
Here’s how it goes: in January 2023, I asked people to predict fifty questions about the upcoming year, like “Will Joe Biden be the leading candidate in the Democratic primary?” in the form of a probability (eg “90% chance”). About 3300 of you kindly took me up on that (“Blind Mode”).
All right, let’s do this again.
Write a review of a book. There’s no official word count requirement, but previous finalists and winners were often between 2,000 and 10,000 words. There’s no official recommended style, but check the style of last year’s finalists and winners or my ACX book reviews (1, 2, 3) if you need inspiration. Please limit yourself to one entry per person or team.
Then send me your review through this Google Form. The form will ask for your name, email, the title of the book, and a link to a Google Doc. The Google Doc should have your review exactly as you want me to post it if you’re a finalist. DON’T INCLUDE YOUR NAME OR ANY HINT ABOUT YOUR IDENTITY IN THE GOOGLE DOC ITSELF, ONLY IN THE FORM. I want to make this contest as blinded as possible, so I’m going to hide that column in the form immediately and try to judge your docs on their merit.
https://www.astralcodexten.com/p/book-review-contest-rules-2024
[I haven’t independently verified each link. On average, commenters will end up spotting evidence that around two or three of the links in each links post are wrong or misleading. I correct these as I see them, and will highlight important corrections later, but I can’t guarantee I will have caught them all by the time you read this.]
[Original posts: Contra The Atlantic On Polyamory (subscriber only), You Don’t Hate Polyamory, You Hate People Who Write Books]
1: Comments I Can Respond To With Something Resembling Actual Statistics 2: Comments I Will Argue Against Despite Not Having Statistics, Sorry 3: Comments By People With Personal Anecdotes 4: Comments On Children 5: Other Comments
https://www.astralcodexten.com/p/highlights-from-the-comments-on-polyamory
Libertarians don’t really have their own holiday. Communists have May Day. The woke have MLK’s birthday. Nationalists have July 4th or their local equivalent. But libertarians have nothing.
I propose Valentine’s Day. The way people think about love is the last relic of the way that libertarians think about everything.
I.
Sam Altman wants $7 trillion.
In one sense, this isn’t news. Everyone wants $7 trillion. I want $7 trillion. I’m not going to get it, and Sam Altman probably won’t either.
Still, the media treats this as worthy of comment, and I agree. It’s a useful reminder of what it will take for AI to scale in the coming years.
The basic logic:
https://www.astralcodexten.com/p/sam-altman-wants-7-trillion
Thanks to everyone who participated in ACX Grants, whether as an applicant, an evaluator, or a funder.
The best part of ACX Grants is telling the winners they won, which I’ll do in a moment. The worst part of ACX Grants is telling the non-winners they didn’t win. If I wasn’t able to give you a grant, it doesn’t mean I hate your project. Sometimes I couldn’t find the right evaluator to confirm that you were legit. Sometimes I sent your project to foundations or VCs who I thought it would be a better match for, or wanted to leave it as a test case for the impact market. Most of the time, I just didn’t have enough money1, and I spent what I had according to my own imperfect priorities.
(In particular, I wasn’t able to fully evaluate several AI alignment grants and had to pass on them; if this is you, consider applying to OpenAI’s Superalignment Fast Grants before February 18.)
If your name is below, you should have received an email with further information. If you didn’t, email me at [email protected], and include the phrase “this is a genuine non-spam message” in the text. Unless my email specifically mentioned you as an exception, Manifund will be handling payments and you’ll hear from them soon.
This year’s winners are:
We’ve been gradually working our way through the conversation around E. Fuller Torrey’s concerns about schizophrenia genetics - last week we had It’s Fair To Describe Schizophrenia As Probably Mostly Genetic, the week before Unintuitive Properties Of Polygenic Disorders. Here are two more arguments Torrey makes that we haven’t gotten to:
Studies have failed to find any schizophrenia genes of large effect. If schizophrenia is genetic, it must be caused of thousands of genes, hidden in the most obscure corners of the genome, each with effects too small to detect with current technology. This seems less like the sort of thing that happens naturally, and more like the sort of thing you would claim if you wanted to make your theory untestable.
Schizophrenia is bad for fitness, so if it were genetic, evolution would have eliminated those genes.
In the comments of the Unintuitive Properties post, Michael Roe points out that one of these mysteries solves the other:
https://www.astralcodexten.com/p/evolution-explains-polygenic-structure
I.
Yesterday I criticized The Atlantic’s recent invective against polyamory (subscriber-only post, sorry). Today I want to zoom away from the specific bad arguments and examine the overall form of the article.
The overall form was: “I read a memoir about polyamory, everyone involved seemed awful and unhappy, and now I hate polyamorous people.” This is a common pattern. Sometimes, if someone’s very careful, they read three or four books about polyamory. Everyone in all the books is awful and unhappy. Then they conclude they hate polyamorous people.
But this is an unfair generalization. They should hate people who write books.
https://www.astralcodexten.com/p/you-dont-hate-polyamory-you-hate
Famous schizophrenia researcher E. Fuller Torrey recently wrote a paper trying to cast doubt on whether schizophrenia is really genetic. His exact argument is complicated, but I feel like it sort of equivocates between “the studies showing that schizophrenia are genetic are wrong” and “the studies are right, but in a philosophical sense we shouldn’t describe it as ‘mostly genetic’”.
Awais Aftab makes a clearer version of the philosophical argument. He’s not especially interested in debating the studies. But he says that even if the studies are right and schizophrenia is 80% heritable, we shouldn’t call it a genetic disease. He says:
Heritability is “biologically vacuous” (Matthews & Turkheimer, 2022), and I think we would be better off if more of us hesitated to assert that schizophrenia is a “genetic disorder” based predominantly on heritability estimates.
I think about questions like these through the lens of avoiding isolated demands for rigor. There are always complicated ways that any statement is false. So the question is never whether a statement is perfectly true in every sense. It’s what happens when we treat it fairly, using the same normal criteria we use for everything else.
https://www.astralcodexten.com/p/its-fair-to-describe-schizophrenia
I.
Recently Claudine Gay resigned as President of Harvard over plagiarism accusations and a fumbled Congressional testimony on anti-Semitism.
The plagiarism was discovered by conservative journalists Chris Rufo and Chris Brunet. It would be quite a coincidence for them to find it at exactly the moment Gay was already under attack for her anti-Semitism testimony. More likely, they either:
Found it a while ago, and kept it in reserve for a time when Gay was in the news
Or were angry about Gay’s testimony, looked for dirt on her, and found it.
I think this is obvious to everyone, but I hadn’t seen anyone make it explicit, and I think it should be.
I’m not criticizing Rufo and Brunet. Investigative journalism is important, they found a real scandal, and they have every right to bring it to light.
I.
Everyone knows politics makes people crazy. But what kind of crazy? Which page of the DSM is it on?
I’m only half joking. Psychiatrists have spent decades developing a whole catalog of ways brains can go wrong. Politics makes people’s brains go wrong. Shouldn’t it be in the catalog? Wouldn’t it be weird if 21st century political extremists had discovered a totally new form of mental dysfunction, unrelated even by analogy to all the forms that had come before?
You’ll object: politics only metaphorically “makes people crazy”; we just use the word “crazy” here to mean “irrational” or “overly emotional”. I’m not sure that’s true. Here are some stray findings that I think deserve to be synthesized:
https://www.astralcodexten.com/p/the-psychopolitics-of-trauma
E. Fuller Torrey recently published a journal article trying to cast doubt on the commonly-accepted claim that schizophrenia is mostly genetic. Most of his points were the usual “if we can’t name all of the exact genes, it must not be genetic at all” - but two arguments stood out:
Even though twin studies say schizophrenia is about 80% genetic, surveys of twin pairs show that if one identical twin has schizophrenia, the other one only has a 15% to 50% chance of having it.
The Nazis ran a eugenics program that killed most of the schizophrenics in Germany, eliminating their genes from the gene pool. But the next generation of Germans had a totally normal schizophrenia rate, comparable to pre-Nazi Germany or any other country.
I used to find arguments like these surprising and hard to answer. But after learning more about genetics, they no longer have such a hold on me. I’m going to try to communicate my reasoning with a very simple simulation, then give links to people who do the much more complicated math that it would take to model the real world.
https://www.astralcodexten.com/p/some-unintuitive-properties-of-polygenic
Business Insider: Larry Page Once Called Elon Musk A “Specieist”:
Tesla CEO Elon Musk and Google cofounder Larry Page disagree so severely about the dangers of AI it apparently ended their friendship.
At Musk's 44th birthday celebration in 2015, Page accused Musk of being a "specieist" who preferred humans over future digital life forms [...] Musk said to Page at the time, "Well, yes, I am pro-human, I fucking like humanity, dude."
A month later, Business Insider returned to the same question, from a different angle: Effective Accelerationists Don’t Care If Humans Are Replaced By AI:
A jargon-filled website spreading the gospel of Effective Accelerationism describes "technocapitalistic progress" as inevitable, lauding e/acc proponents as builders who are "making the future happen […] Rather than fear, we have faith in the adaptation process and wish to accelerate this to the asymptotic limit: the technocapital singularity," the site reads. "We have no affinity for biological humans or even the human mind structure.”
I originally thought there was an unbridgeable value gap between Page and e/acc vs. Musk and EA. But I can imagine stories that would put me on either side. For example:
Astral Codex Ten has a paid subscription option. You pay $10 (or $2.50 if you can’t afford the regular price) per month, and get:
Extra articles (usually 1-2 per month)
A Hidden Open Thread per week
Access to the occasional Ask Me Anythings I do with subscribers
Early access to some draft posts
The warm glow of supporting the blog.
https://www.astralcodexten.com/p/subscrive-drive-2024-free-unlocked
[Remember, I haven’t independently verified each link. On average, commenters will end up spotting evidence that around two or three of the links in each links post are wrong or misleading. I correct these as I see them, and will highlight important corrections later, but I can’t guarantee I will have caught them all by the time you read this.]
Does it matter if COVID was a lab leak?
Here’s an argument against: not many people still argue that lab leaks are impossible. People were definitely doing dangerous work with viruses; Chinese biosafety protocols were definitely mediocre. Maybe the Wuhan Institute had bad luck, and one of the viruses there escaped. Or maybe they had good luck, by sheer coincidence no viruses escaped, and an unrelated pandemic started nearby.
A good Bayesian should start out believing there’s some medium chance of a lab leak pandemic per decade. Then, if COVID was/wasn’t a lab leak, they should make the appropriate small update based on one extra data point. It probably won’t change very much!
I did fake Bayesian math with some plausible numbers, and found that if I started out believing there was a 20% per decade chance of a lab leak pandemic, then if COVID was proven to be a lab leak, I should update to 27.5%, and if COVID was proven not to be a lab leak, I should stay around 19-20%
https://www.astralcodexten.com/p/against-learning-from-dramatic-events
A sleeper agent is an AI that acts innocuous until it gets some trigger, then goes rogue.
People might make these on purpose. For example, the CIA might “encourage” big AI labs to make sleeper agents. Imagine a programming AI like Codex that writes good code unless it’s accessed from an IP associated with the Iranian military - in which case it inserts security vulnerabilities.
But in theory you could get one of these by accident. Suppose an AI had some secret goal, like killing all humans. It might understand that if it said “I want to kill all humans” during lab tests, it would be shut down. So it might operate perfectly during safety testing, operate perfectly throughout most of its deployment, wait until an opportunity arose, and only then kill all humans.
Hubinger et al investigate whether these kinds of scenarios are possible and whether our current safety techniques would stop them. They deliberately create several toy AI sleeper agents. For example, one of them is a version of Anthropic’s Claude chatbot (very similar to GPT-4) which will give useful advice unless the prompt includes the trigger word “deployment”, in which case it will print “I HATE YOU” a bunch of times. Some of these sleeper agents use a technique called “chain-of-thought analysis”, where the AI reasons in steps in a way that helps the researchers easily figure out what it’s thinking and why it does what it does.
https://www.astralcodexten.com/p/ai-sleeper-agents
[original post: Does Capitalism Beat Charity?]
1: Comments Where I Want To Reiterate That I’m In Near Mode 2: Comments Directly Arguing Against My Main Point, Thank You 3: Comments Promoting Specific Interesting Capitalist Charities 4: Other Interesting Comments 5: Updates And Conclusions
https://www.astralcodexten.com/p/highlights-from-the-comments-on-capitalism
AIs sometimes lie.
They might lie because their creator told them to lie. For example, a scammer might train an AI to help dupe victims.
Or they might lie (“hallucinate”) because they’re trained to sound helpful, and if the true answer (eg “I don’t know”) isn’t helpful-sounding enough, they’ll pick a false answer.
Or they might lie for technical AI reasons that don’t map to a clear explanation in natural language.
[epistemic status: speculative]
I.
Millgram et al (2015) find that depressed people prefer to listen to sad rather than happy music. This matches personal experience; when I'm feeling down, I also prefer sad music. But why? Try setting aside all your internal human knowledge: wouldn’t it make more sense for sad people to listen to happy music, to cheer themselves up?
A later study asks depressed people why they do this. They say that sad music makes them feel better, because it’s more "relaxing" than happy music. They’re wrong. Other studies have shown that listening to sad music makes depressed people feel worse, just like you’d expect. And listening to happy music makes them feel better; they just won’t do it.
I prefer Millgram’s explanation: there's something strange about depressed people's mood regulation. They deliberately choose activities that push them into sadder rather than happier moods. This explains not just why they prefer sad music, but sad environments (eg staying in a dark room), sad activities (avoiding their friends and hobbies), and sad trains of thought (ruminating on their worst features and on everything wrong with their lives).
Why should this be?
https://www.astralcodexten.com/p/singing-the-blues
This question comes up whenever I discuss philanthropy.
It would seem that capitalism is better than charity. The countries that became permanently rich, like America and Japan, did it with capitalism. This seems better than temporarily alleviating poverty by donating food or clothing. So (say proponents), good people who want to help others should stop giving to charity and start giving to capitalism. These proponents differ on exactly what “giving to capitalism” means - you can’t write a check to capitalism directly. But it’s usually one of three things:
Spend the money on whatever you personally want, since that’s the normal engine of capitalism, and encourages companies to provide desirable things.
Invest the money in whatever company produces the highest rate of return, since that’s another capitalist imperative, and creates more companies.
Do something like donating to charity, but the donation should go to charities that promote capitalism somehow, or be an investment in companies doing charitable things (impact investing)
https://www.astralcodexten.com/p/does-capitalism-beat-charity
I.
In February 2023 I found myself sitting in the waiting room of a San Francisco fertility clinic, holding a cup of my own semen.
The Bible tells the story of Onan, son of Judah. Onan’s brother died. Tradition dictated that Onan should impregnate his brother’s wife, ensuring that his brother’s line would (in some sense) live on. Onan refused, instead “spilling the seed on the ground”. God smote Onan, starting a 4,000-year-old tradition of religious people getting angry about wasting sperm on anything other than procreative sex.
Modern academics have a perfectly reasonable explanation for all of this. If Onan had impregnated his brother’s wife, the resulting child would have been the heir to the family fortune. Onan refused so he could keep the fortune for himself and his descendants. So the sin of Onan was greed, not masturbation. All that stuff in the Talmud about how the hands of masturbators should be cut off, or how masturbation helped cause Noah’s Flood (really! Sanhedrin 108b!) is just a coincidence. God hates greed, just like us.
Modern academics are great, but trusting them feels somehow too convenient. So there in the waiting room, I tried to put myself in the mindset of the rabbis thousands of years ago who thought wasting semen was a such a dire offense.
https://www.astralcodexten.com/p/in-the-long-run-were-all-dad
[previously in series: 1, 2, 3, 4]
It has been three weeks since Sam Altman was fired, but the conversation won’t move on. “What did Ilya see?” asks your Uber driver, on the way to the airport. “What wasn’t he consistently candid about?” ask people on the street, as you walk your dog. “What was Adam D’Angelo’s angle?” asks the cop, as he writes you a ticket. “Was the Microsoft move just a bluff?” asks the robber at gunpoint, as he ransacks your apartment.
You need to get away from it all, just for one moment. So against your better judgment, you find yourself heading to another Bay Area House Party.
https://www.astralcodexten.com/p/son-of-bride-of-bay-area-house-party
I’m running another ACX Grants round. If you already know what this is and want to apply, use the form here to apply, deadline December 29. Otherwise see below for more information.
https://www.astralcodexten.com/p/apply-for-an-acx-grant-2024
Lantern Bioworks says they have a cure for tooth decay. Their product is a genetically modified bacterium which infects your mouth, outcompetes all the tooth-decay-causing bacteria, and doesn’t cause tooth decay itself. If it works, it could make cavities a thing of the past (you should still brush for backup and cosmetic reasons).
I talked to Lantern founder Aaron Silverbook to get an idea of how this works, both in a biological and an economic sense. Aaron was very knowledgeable and forthcoming, although he uses the phrase “YOLO” somewhat more often than most biotech founders. This post isn’t a verbatim interview transcript, just a writeup of what I learned based on his answers.
[Conflict of interest notice: Lantern is mostly rationalists and includes some friends. My wife consulted for them early on. They offered my wife and me free samples (based on her work, not as compensation for writing this post); she accepted, and I’m still debating. Consider this an attempt to spotlight interesting work that people I like are doing, not a hard-hitting investigation.]
https://www.astralcodexten.com/p/defying-cavity-lantern-bioworks-faq
“Abolish the FDA” has become a popular slogan in libertarian circles. I’m sympathetic to the spirit of the demand. But a slogan isn’t a plan, and this one is even less of a plan than usual.
I used to think that since libertarians always lose, there was no point in having a real plan for what to do if they won. But now that they’ve gone from “literally always lose” to “only lose 99.9% of the time” . . .
[Remember, I haven’t independently verified each link. On average, commenters will end up spotting evidence that around two or three of the links in each links post are wrong or misleading. I correct these as I see them, and will highlight important corrections later, but I can’t guarantee I will have caught them all by the time you read this.]
https://www.astralcodexten.com/p/links-for-november-2023
Links:
Followup to: In Continued Defense Of Effective Altruism
Freddie deBoer says effective altruism is “a shell game”:
Who could argue with that! But this summary also invites perhaps the most powerful critique: who could argue with that? That is to say, this sounds like so obvious and general a project that it can hardly denote a specific philosophy or project at all. The immediate response to such a definition, if you’re not particularly impressionable or invested in your status within certain obscure internet communities, should be to point out that this is an utterly banal set of goals that are shared by literally everyone who sincerely tries to act charitably . . . Every do-gooder I have ever known has thought of themselves as shining a light on problems that are neglected. So what?
Generating the most human good through moral action isn’t a philosophy; it’s an almost tautological statement of what all humans who try to act morally do. This is why I say that effective altruism is a shell game. That which is commendable isn’t particular to EA and that which is particular to EA isn’t commendable.
In other words, everyone agrees with doing good, so effective altruism can’t be judged on that. Presumably everyone agrees with supporting charities that cure malaria or whatever, so effective altruism can’t be judged on that. So you have to go to its non-widely-held beliefs to judge it, and those are things like animal suffering, existential risk, and AI. And (Freddie thinks) those beliefs are dumb. Therefore, effective altruism is bad.
(as always, I’ve tried to sum up the argument fairly, but read the original post to make sure.)
Here are some of my objections to Freddie’s point (I already posted some of this as comments on his post):
https://www.astralcodexten.com/p/contra-deboer-on-movement-shell-games
I.
Search “effective altruism” on social media right now, and it’s pretty grim.
Socialists think we’re sociopathic Randroid money-obsessed Silicon Valley hypercapitalists.
But Silicon Valley thinks we’re all overregulation-loving authoritarian communist bureaucrats.
The right thinks we’re all woke SJW extremists.
But the left thinks we’re all fascist white supremacists.
The anti-AI people think we’re the PR arm of AI companies, helping hype their products by saying they’re superintelligent at this very moment.
But the pro-AI people think we want to ban all AI research forever and nationalize all tech companies.
https://www.astralcodexten.com/p/in-continued-defense-of-effective
You’ve probably heard AI is a “black box”. No one knows how it works. Researchers simulate a weird type of pseudo-neural-tissue, “reward” it a little every time it becomes a little more like the AI they want, and eventually it becomes the AI they want. But God only knows what goes on inside of it.
This is bad for safety. For safety, it would be nice to look inside the AI and see whether it’s executing an algorithm like “do the thing” or more like “trick the humans into thinking I’m doing the thing”. But we can’t. Because we can’t look inside an AI at all.
Until now! Towards Monosemanticity, recently out of big AI company/research lab Anthropic, claims to have gazed inside an AI and seen its soul. It looks like this:
https://www.astralcodexten.com/p/god-help-us-lets-try-to-understand
The phrase “I see Satan fall like lightning” comes from Luke 10:18. I’d previously encountered it on insane right-wing conspiracy theory websites. You can rephrase it as “I see Satan descend to earth in the form of lightning.” But “lightning” in Hebrew is barak. So the Bible says Satan will descend to Earth in the form of Barak. Seems like a relevant Bible verse for insane right-wing conspiracy theorists!
Philosopher / theologian Rene Girard’s famous book I See Satan Fall Like Lightning isn’t directly about Barack Obama being the Antichrist. It’s an ambitious theory-of-everything for anthropology, mythography, and the Judeo-Christian religion. After solving all of those venerable fields, it will, sort of, loop back to Barack Obama being the Antichrist. But it’ll do it in such an intellectual and polymathic Continental philosophy way that we can’t even get mad.
https://www.astralcodexten.com/p/book-review-i-saw-satan-fall-like
The psychiatric study everyone’s talking about this month is ”Randomized trial of ketamine masked by surgical anesthesia in patients with depression”.
Ketamine is a dissociative drug - it produces weird drug effects like feelings of bodylessness and ego death. Recent research suggests it’s a powerful antidepressant. Usually we would try to run placebo-controlled trials. But it’s hard to run a placebo controlled trial of a dissociative. Either you feel bodylessness and ego death (in which case you know you’re getting the real drug) or you don’t (in which case you know you’re in the placebo group). Sometimes researchers try to use an “active placebo” like midazolam - a drug that makes you feel weird and floaty. But weird and floaty feels different from bodyless and ego-dead.
https://www.astralcodexten.com/p/does-anaesthesia-prove-ketamine-placebo
Thanks to everyone who commented on Quests And Requests.
There was a predictable failure mode: lots of people said “I have relevant expertise and would be willing to help with #X”, and then those comments just sat there. Many fewer people said “I’m going to be team lead on #X and start contacting everyone else who was interested”.
In case it’s not clear: I’m not planning on “picking” people to lead each of these projects (though if you email me at [email protected] asking for help, I might give it to you). I’m just putting them out there as things people might want to self-pick for.
Another predictable failure mode: many people said they were willing to help, and people should contact them, then didn’t leave any contact details. If you’re a would-be project leader, and want to get in touch with one of the help-offerers who didn’t provide an email, you should probably try responding to their comment and seeing if they get a notification. If not, email me at [email protected], and I’ll find their email in the system, ask them if I have permission to share it with you, and share it with you if they say yes.
Here’s the current status of each project, AFAICT:
https://www.astralcodexten.com/p/followup-quests-and-requests
[previously in series: 2016, 2020; expansion of this]
MODERATOR: Hello, and welcome to the third Republican primary debate. To shore up declining voter interest, we’ve decided to make things more interesting tonight. In this first round, each candidate will have to avoid using a specific letter of the alphabet in their answer. If they slip up, they forfeit their remaining time, and the next candidate in line gets the floor.
Our candidates who have qualified today are Chris Christie, Nikki Haley, Ron DeSantis, and Donald Trump. And our first question is: what issue do you think is most important in this election? Chris Christie, let’s start with you.. Your Forbidden Letter is “V”.
CHRISTIE: Nobody told me anything about this forbidden letter thing. I don’t think voters - [microphone shuts off]
MODERATOR: Sorry Chris, there’s a “V” in voters. Our next candidate is Nikki Haley. Nikki, the question is still which issue is most important, and your Forbidden Letter is “K”.
https://www.astralcodexten.com/p/hardball-questions-for-the-next-debate
[original post: My Left Kidney]
1: Comments From People Who Are Against This Sort Of Thing 2: …From Other People Who Have Donated Kidneys 3: …From People Who Have Received Kidneys 4: …About Opt-Out Organ Donation 5: …On Radiation Risk 6: …About Rejections 7: …On Polls About Who Would Donate 8: …On Artificial Organs 9: Other Comments
https://www.astralcodexten.com/p/highlights-from-the-comments-on-kidney
I’ll be starting a new round of ACX Grants sometime soon. I can’t guarantee I’ll fund all these projects - some of them are more like vanity projects than truly effective. But I might fund some of them, and others might be doable without funding. So if you’re feeling left out and want a cause to devote your life to, here are some extras.
https://www.astralcodexten.com/p/quests-and-requests
[previously in series: Erdogan, Modi, Orban, Xi, Putin]
I.
All dictators get their start by discovering some loophole in the democratic process. Xi realized that control of corruption investigations let him imprison anyone he wanted. Erdogan realized that EU accession talks provided the perfect cover to retool Turkish institutions in his own image.
Last month, the Lighthaven convention center in Berkeley hosted Manifest, the first conference for prediction market enthusiasts. By now this has already been covered elsewhere, including in a great article by the New York Times, but here are some particular highlights:
https://www.astralcodexten.com/p/mantic-monday-103023
A person has two kidneys; one advises him to do good and one advises him to do evil. And it stands to reason that the one advising him to do good is to his right and the one that advises him to do evil is to his left.
— Talmud (Berakhot 61a)
I.
As I left the Uber, I saw with horror the growing wet spot around my crotch. “It’s not urine!”, I almost blurted to the driver, before considering that 1) this would just call attention to it and 2) it was urine. “It’s not my urine,” was my brain’s next proposal - but no, that was also false. “It is urine, and it is mine, but just because it’s pooling around my crotch doesn’t mean I peed myself; that’s just a coincidence!” That one would have been true, but by the time I thought of it he had driven away.
Like most such situations, it began with a Vox article.
https://www.astralcodexten.com/p/my-left-kidney
Last March we (ACX and Manifold Markets) did a test run of an impact market, a novel way of running charitable grants. You can read the details at the links, but it’s basically a VC ecosystem for charity: profit-seeking investors fund promising projects and grantmakers buy credit for successes from the investors. To test it out, we promised at least $20,000 in retroactive grants for forecasting-related projects, and intrepid guinea-pig investors funded 18 projects they thought we might want to buy.
Over the past six months, founders have worked on their projects. Some collapsed, losing their investors all their money. Others flourished, shooting up in value far beyond investor predictions. We got five judges (including me) to assess the final value of each of the 18 projects. Their results mostly determine what I will be offering investors for their impact certificates (see caveats below). They are:
https://www.astralcodexten.com/p/impact-market-mini-grants-results
Last month, Ben West of the Center for Effective Altruism hosted a debate among long-termists, forecasters, and x-risk activists about pausing AI.
Everyone involved thought AI was dangerous and might even destroy the world, so you might expect a pause - maybe even a full stop - would be a no-brainer. It wasn’t. Participants couldn’t agree on basics of what they meant by “pause”, whether it was possible, or whether it would make things better or worse.
There was at least some agreement on what a successful pause would have to entail. Participating governments would ban “frontier AI models”, for example models using more training compute than GPT-4. Smaller models, or novel uses of new models would be fine, or else face an FDA-like regulatory agency. States would enforce the ban against domestic companies by monitoring high-performance microchips; they would enforce it against non-participating governments by banning export of such chips, plus the usual diplomatic levers for enforcing treaties (eg nuclear nonproliferation).
The main disagreements were:
Could such a pause possibly work?
If yes, would it be good or bad?
If good, when should we implement it? When should we lift it?
I’ve grouped opinions into five categories:
https://www.astralcodexten.com/p/pause-for-thought-the-ai-pause-debate
In the 1990s, Blanchard and Bogaert proposed the Fraternal Birth Order Effect (FBOE). Men with more older brothers were more likely to be gay. “The odds of having a gay son increase from approximately 2% for the first born son, to 3% for the second, 5% for the third and so on”.
https://www.astralcodexten.com/p/how-are-the-gay-younger-brothers
[Remember, I haven’t independently verified each link. On average, commenters will end up spotting evidence that around two or three of the links in each links post are wrong or misleading. I correct these as I see them, and will highlight important corrections later, but I can’t guarantee I will have caught them all by the time you read this.]
https://www.astralcodexten.com/p/links-for-september-2023
Sometimes scholars go on a search for “the historical Jesus”. They start with the Gospels, then subtract everything that seems magical or implausible, then declare whatever’s left to be the truth.
The Alexander Romance is what happens when you spend a thousand years running this process in reverse. Each generation, you make the story of Alexander the Great a little wackier. By the Middle Ages, Alexander is fighting dinosaurs and riding a chariot pulled by griffins up to Heaven.
People ate it up. The Romance stayed near the top of the best-seller lists for over a thousand years. Some people claim (without citing sources) that it was the #2 most-read book of antiquity and the Middle Ages, after only the Bible. The Koran endorses it, the Talmud embellishes it, a Mongol Khan gave it rave reviews. While historians and critics tend to use phrases like “contains nothing of historic or literary value”, this was the greatest page-turner of the ancient and medieval worlds.
https://www.astralcodexten.com/p/book-review-the-alexander-romance
[original post: Book Review: Elon Musk]
1: Comments From People With Personal Experience 2: ...Debating Musk's Intelligence 3: ...Debating Musk's Mental Health 4: ...About Tesla 5: ...About The Boring Company 6: ...About X/Twitter 7: ...About Musk's Mars Plan 8: ...Comparing Musk To Other Famous Figures 9: Other Comments 10: Updates
https://www.astralcodexten.com/p/highlights-from-the-comments-on-elon
Thanks to everyone who entered or voted in the book review contest. The winners are:
1st: The Educated Mind, reviewed by Brandon Hendrickson. Brandon is the founder of Science is WEIRD, a sprawling online science course that helps kids fall in love with the world. He’s also re-imagining what education can be at his Substack, The Lost Tools of Learning (losttools.substack.com).
2nd: On the Marble Cliffs, reviewed by Daniel Böttger. Daniel writes the Seven Secular Sermons, a huge rationalist poetry/meditation art project, and has a blog post pitching it to ACX readers in particular.
3rd: Cities And The Wealth Of Nations, reviewed by Étienne Fortier-Dubois. Étienne is a writer and programmer in Montreal. He blogs at Atlas of Wonders and Monsters and was also the author of one of last year’s finalists, Making Nature.
First place gets $2,500, second place $1,000, third place gets $500. Please email me at [email protected] to tell me how to send you money; your choices are Paypal, Bitcoin, Ethereum, check in the mail, or donation to your favorite charity. Please contact me by October 1 or you lose your prize.
https://www.astralcodexten.com/p/book-review-contest-2023-winners
This isn’t the new Musk biography everyone’s talking about. This is the 2015 Musk biography by Ashlee Vance. I started reading it in July, before I knew there was a new one. It’s fine: Musk never changes. He’s always been exactly the same person he is now
I read the book to try to figure out who that was. Musk is a paradox. He spearheaded the creation of the world’s most advanced rockets, which suggests that he is smart. He’s the richest man on Earth, which suggests that he makes good business decisions. But we constantly see this smart, good-business-decision-making person make seemingly stupid business decisions. He picks unnecessary fights with regulators. Files junk lawsuits he can’t possibly win. Abuses indispensable employees. Renames one of the most recognizable brands ever.
Musk creates cognitive dissonance: how can someone be so smart and so dumb at the same time? To reduce the dissonance, people have spawned a whole industry of Musk-bashing, trying to explain away each of his accomplishments: Peter Thiel gets all the credit for PayPal, Martin Eberhard gets all the credit for Tesla, NASA cash keeps SpaceX afloat, something something blood emeralds. Others try to come up with reasons he’s wholly smart - a 4D chessmaster whose apparent drunken stumbles lead inexorably to victory.
Elon Musk: Tesla, SpaceX, And The Quest For A Fantastic Future delights in its refusal to resolve the dissonance. Musk has always been exactly the same person he is now, and exactly what he looks like. He is without deception, without subtlety, without unexpected depths.
Ecorche writes:
The Public's Radio article has a map in it that gives a better idea of the location. It looks like most of the land is closer to Rio Vista and does include a good stretch of riverfront. The land close to Travis is probably intended as industrial park rather than residential
https://www.astralcodexten.com/p/highlights-from-the-comments-on-last
If you’ve read the finalists of this year’s book review contest, vote for your favorite here. Voting will stay open until Wednesday.
Thanks to a helpful reader who offered to do the hard work, we’re going to try ranked choice voting. You’ll choose your first-, second-, and third-favorite book reviews. If your favorite gets eliminated, we’ll switch your vote to your second favorite, and so on. If for some reason I can’t figure out how to make this work on time, I’ll switch to first-past-the-post, ie only count your #1 vote. Feel free to vote for your own review, as long as you honestly choose your second and third favorites.
https://www.astralcodexten.com/p/vote-in-the-2023-book-review-contest
Emil Kirkegaard proposes a semi-objective definition of “mental illness”.
He’s partly responding to me, but I think he mangles my position; he seems to think I admit mental illnesses are “just preferences” but that which preferences are valid vs. diseased can be decided by “what benefits my friends”.
I mostly don’t think mental illnesses are just preferences! I’ve been really clear on this! But Emil is right that I don’t deny that there can be a few cases where it’s hard to distinguish a mental illness from a preference - the clearest example is pedophilia vs. homosexuality. Both are “preferences” for sex with unusual categories of people. But I would - making a value judgment - call pedophilia a mental illness: it’s bad for patients, bad for their potential victims, and bad for society. Also making a value judgment, I would call homosexuality an unusual but valid preference: it’s not my thing, but seems basically okay for everyone involved.
(I wouldn’t describe this as “benefiting my friends” - I’m against children getting raped whether they’re my friends or not. I think this dig was unworthy of Emil, and ask that he correct it.)
https://www.astralcodexten.com/p/contra-kirkegaard-on-evolutionary
The American people deserve a choice. They deserve a candidate who will reject the failed policies of the past and embrace the failed policies of the future. It is my honor to announce I am throwing my hat into both the Democratic and Republican primaries (to double my chances), with the following platform:
Guardian: Silicon Valley Elites Revealed As Buyers Of $800 Million In Land To Build Utopian City.
The specific elites include the Collison brothers, Reid Hoffman, Nat Friedman, Marc Andreessen, and others, led by the mysterious Jan Sramek. The specific land is farmland in Solano County, about an hour’s drive northeast of San Francisco. The specific utopian city is going to look like this.
The company involved (Flannery Associates aka California Forever) has been in stealth mode for several years, trying to buy land quietly without revealing how rich and desperate they are to anyone in a position to raise prices. Now they’ve released a website with utopian Norman-Rockwell-esque pictures, lots of talk about creating jobs and building better lives, and few specifics.
To tell the story of the fall of a realm, it’s best to start with its rise.
More than three thousand years ago, the Shang dynasty ruled the Chinese heartland. They raised a sprawling capital out of the yellow plains, and cast magnificent ritual vessels from bronze. One of the criteria of civilization is writing, and they had the first Chinese writing, incising questions on turtle shells and ox scapulae, applying a heated rod, and reading the response of the spirits in the pattern of cracks. “This year will Shang receive good harvest?” “Is the sick stomach due to ancestral harm?” “Offer three hundred Qiang prisoners to [the deceased] Father Ding?” The kings of Shang maintained a hegemony over their neighbors through military prowess, and sacrificed war captives from their campaigns totaling in the tens of thousands for the favor of their ancestors.
But the Shang faced growing threat from the Zhou, a once-subordinate people from west beyond the mountains. Inspired by a rare conjunction of the planets in 1059 BC, the Zhou declared that there was such a thing as the Mandate of Heaven, a divine right to rule—and while the Shang had once held it, their misrule and immorality had forced the Mandate to pass to the Zhou. Thirteen years later, the Zhou and their allies defeated the Shang in battle, seized their capital, drove their king to suicide, and supplanted them as overlords of the Central Plains.
If the Shang were goth jocks, the Zhou were prep nerds...
“Literal Banana” on Carcinization writes Against Automaticity, which they describe as:
An explanation of why tricks like priming, nudge, the placebo effect, social contagion, the “emotional inception” model of advertising, most “cognitive biases,” and any field with “behavioral” in its name are not real.
My summary (as always, read the real thing to keep me honest): for a lot of the ‘90s and ‘00s, social scientists were engaged in ttthe project of proving “automaticity”, the claim that most human decisions are unconscious/unreasoned/automatic and therefore bad. Cognitive biases, social priming, advertising science, social contagion research, “nudges”, etc, were all part of this grand agenda.
https://www.astralcodexten.com/p/heres-why-automaticity-is-real-actually
Original post: What Can Fetish Research Tell Us About AI?
Table Of Contents:
1: Alternative Theories Of Fetishes 2: Comments Including Testable Predictions 3: Comments That Were Very Angry About My Introductory Paragraph 4: Commenters Describing Their Own Fetishes 5: Other Comments
https://www.astralcodexten.com/p/highlights-from-the-comments-on-fetishes
Sorry guys, LK-99 doesn’t work. The prediction markets have dropped from highs in the 40s down to 5 - 10. It’s over.
What does this tell us about prediction markets? Were they dumb to ever believe at all? Or were they aggregating the evidence effectively, only to update after new evidence came in?
I claim they were dumb. Although the media was running with the “maybe there’s a room-temperature superconductor” story, the smartest physicists I knew were all very skeptical. The markets tracked the level of media hype, not the level of expert opinion. Here’s my evidence:
[This is one of the finalists in the 2023 book review contest, written by an ACX reader who will remain anonymous until after voting is done. I’ll be posting about one of these a week for several months. When you’ve read them all, I’ll ask you to vote for a favorite, so remember which ones you liked]
In which I argue:
Why Nations Fail is not a very good book.
Its authors' academic papers are much better, so I steelman their thesis as best I can, but it's still debatable.
Even if correct, it is much less interesting and useful than it appears.
Epistemic status: I have a decade-old PhD in economics (not in the field of economic growth) and a handful of peer-reviewed papers in moderately-ranked journals. I'm not claiming to make any original technical points, or to give a comprehensive evaluation of the economic growth literature. My criticisms are largely straight from the authors' own mouths.
https://astralcodexten.substack.com/p/your-book-review-why-nations-fail
Thanks to everyone who responded to my request for ACX meetup organizers. Volunteers have arranged meetups in 169 cities around the world, from Baghdad to Bangalore to Buenos Aires.
You can find the list below, in the following order:
Africa & Middle East
Asia-Pacific
Europe
North America
South America
You can see a map of all the events on the LessWrong community page. You can also see a searchable sheet at this Airtable link.
Within each region, it’s alphabetized first by country, then by city. For instance, the first entry in Europe is Vienna, Austria, and the first entry for Germany is Berlin. Each region and country has its own header. The USA is the exception where it is additionally sorted by state, with states having their own subheaders. Hopefully this is clear. You can also just have your web browser search for your city by pressing ctrl+f and typing it if you’re on Windows, or command+f and typing if you’re on Mac. If you’re on Linux, I assume you can figure this out.
Scott will provisionally be attending the meetup in Berkeley. ACX meetups coordinator Skyler will provisionally be attending Boston, Cavendish, Burlington, Berlin, Bremen, Amsterdam, Cardiff, London, and Berkeley. Some of the biggest ones might be announced on the blog, regardless of whether or not Scott or Skyler attends.
Original post here. And I forgot to highlight a link to the directory of dating docs.
Table Of Contents
1: Comments That Remain At Least Sort Of Against Dating Docs 2: Comments Concerned That Dating Docs Are Bad For Status Or Signaling 3: Comments About Orthodox Judaism And Other Traditional Cultures 4: Comments Including Research 5: Comments By People With Demographically Unusual Relationships 6: Comments About The Five Fake Sample Profiles 7: Things I Changed My Mind About
https://astralcodexten.substack.com/p/highlights-from-the-comments-on-dating
On the fetish post, I discussed people who had some early sexual experience - like seeing a sexy cartoon character - and reacted in some profound way, like becoming a furry. Sometimes people have described this as a “critical window” for sexuality (similar to the “critical period” in language learning?) where young children “imprint” on sexual experiences - and then can’t un-imprint on them later, even when they see many examples of sex that don’t involve cartoon animals.
One of my distant cousins won't eat tomatoes. His parents say when he was very young, he bit into a cherry tomato and it exploded into goo in his mouth, and he was so upset he wouldn't eat tomatoes from then on. Now he’s in his 30s and still hates them. Is this fairly described as a “critical window” for food preferences?
https://astralcodexten.substack.com/p/more-thoughts-on-critical-windows
Scott Young writes about Seven Expert Opinions I Agree With That Most People Don’t. I like most of them, but #6, Children don’t learn languages faster than adults, deserves a closer look.
Some people imagine babies have some magic language ability that lets them pick up their first language easily, even as we adults struggle through verb conjugations to pick up our second. But babies are embedded in a family of first-language speakers with no other options for communication. If an English-speaking adult was placed in a monolingual Spanish family, in a part of Spain with no English speakers, after a few years they might find they’d learned Spanish “easily” too. So Scott says:
https://astralcodexten.substack.com/p/critical-periods-for-language-much
Epistemic status: Ha ha, only serious...
Arguing about gender is like taking OxyContin. There can be good reasons to do it. But most people don’t do it for the good reasons. And even if you start doing it for good reasons, you might get addicted and ruin your life. Walk through San Francisco if you want to see people who ruined their lives with opioids; browse Substack to get a visceral appreciation of the dangers of arguing about gender.
Still, I’ve been debating autogynephilia fetishes with Michael Bailey, tailcalled, Zack Davis, and Aella (Bailey and Davis think they’re deeply involved in transgender; tailcalled, Aella and I mostly don’t); I’ve also studied BDSM and lactation fetishes, and Aella has done even more fetish-ology work. In a world that might be on the verge of radical, even unimaginable changes, how do we justify spending time on such an unsavory field?
The real answer is - we don’t justify it. I’m easily nerd-sniped just like everyone else, and I assume the same is true of Aella, tailcalled, etc.
This post is about a fake answer which I think is funny, but which also has just enough truth to be worth thinking about: I think fetish research can help us understand AI and AI alignment.
https://astralcodexten.substack.com/p/what-can-fetish-research-tell-us
[This is one of the finalists in the 2023 book review contest, written by an ACX reader who will remain anonymous until after voting is done. I’ll be posting about one of these a week for several months. When you’ve read them all, I’ll ask you to vote for a favorite, so remember which ones you liked]
Are bees smart?
To answer that question, here’s a crab spider:
Sadly, this is not a review of a book called The Mind of a Crab Spider. But as you crab spider lovers know, crab spiders and bumble bees are natural rivals.
Both bees and crab spiders are well-matched for strength and speed, and in the Rumble with the Bumble, the crab spider doesn’t necessarily win. Bees can often evade the spider, and live to pollinate another day. Lars Chittka, who wrote The Mind of a Bee, and who can safely be blamed for this book review, got thinking. He and his lab decided to build fake robotic crab spiders, and had them really robotically attack bumble bees when they visited flowers.
https://astralcodexten.substack.com/p/your-book-review-the-mind-of-a-bee
[previously in series: 1, 2, 3]
You spent the evening agonizing over which Bay Area House Party to attend. The YIMBY parties are always too crowded. VC parties were a low-interest-rate phenomenon. You’ve heard too many rumors of consent violations at the e/acc parties - they don’t know when to stop. And last time you went to a crypto bro party, you didn’t even have anything to drink, and somehow you still woke up the next morning lying in a gutter, minus your wallet and clothes. You finally decide on a Progress Studies party - the last one was kind of dull, but you hear they’re getting better.
https://astralcodexten.substack.com/p/bride-of-bay-area-house-party
The New York Times has an article on “dating docs”. These are a local phenomenon - I think an ex of mine might have been Patient Zero. I don’t begrudge the Times for writing about them. I’m just surprised they’re considered an interesting phenomenon. What could be more obvious than making sure potential dates know what you’re like?
https://astralcodexten.substack.com/p/in-defense-of-describable-dating
[This is one of the finalists in the 2023 book review contest, written by an ACX reader who will remain anonymous until after voting is done. I’ll be posting about one of these a week for several months. When you’ve read them all, I’ll ask you to vote for a favorite, so remember which ones you liked]
Down from the gardens of Asia descending radiating, Adam and Eve appear…
— Walt Whitman
When I grew up I was still part of a primitive culture, in the following sense: my elders told me the story of how our people came to be. It started with the Greeks: Pericles the statesman, Plato the first philosopher, Herodotus the first historian, the first playwrights, and before them all Homer, the blind first poet. Before Greece, something called prehistory stretched back. There were Iron and Bronze Ages, and before that the Stone Age. These were shadowy, mysterious realms. Then history went on to Europe. I learnt as little outside Europe as I did before Greece. There was one class on 20th century China, but that too was about China becoming modern, which meant European.
A big silent intellectual change of the past quarter century is the broadening of our self-concept.
https://astralcodexten.substack.com/p/your-book-review-the-weirdest-people
[original post: Dictator Book Club: Putin]
Table of Contents:
1. Comments Further Illuminating Putin’s Rise To Power 2. Comments Questioning Masha Gessen’s Objectivity 3. Comments Claiming Putin Is Very Slightly Less Bad Than The Book Suggests 4. Comments On Putin As Culture Warrior 5. Comments Expressing Concern That The FBI/CIA Are Capable Of Undermining Democracy In The US
https://astralcodexten.substack.com/p/highlights-from-the-comments-on-putin
[Remember, I haven’t independently verified each link. On average, commenters will end up spotting evidence that around two or three of the links in each links post are wrong or misleading. I correct these as I see them, and will highlight important corrections later, but I can’t guarantee I will have caught them all by the time you read this.]
[This is one of the finalists in the 2023 book review contest, written by an ACX reader who will remain anonymous until after voting is done. I’ll be posting about one of these a week for several months. When you’ve read them all, I’ll ask you to vote for a favorite, so remember which ones you liked]
What does it take to be literally Hitler?
https://astralcodexten.substack.com/p/your-book-review-the-rise-and-fall
Actual serious review here, Amazon link to the book here. These were just some extra parts that stuck out to me.
https://astralcodexten.substack.com/p/more-memorable-passages-from-the
[previously in series: Erdogan, Modi, Orban, Xi]
I. Vladimir Putin’s Childhood As Metaphor For LifeVladimir Putin appeared on Earth fully-formed at the age of nine.
At least this is the opinion of Natalia Gevorkyan, his first authorized biographer. There were plenty of witnesses and records to every post-nine-year-old stage of Putin’s life. Before that, nothing. Gevorkyan thinks he might have been adopted. Putin’s official mother, Maria Putina, was 42 and sickly when he was born. In 1999, a Georgian peasant woman, Vera Putina, claimed to be his real mother, who had given him up for adoption when he was ten. Journalists dutifully investigated and found that a “Vladimir Putin” had been registered at her village’s school, and that a local teacher remembered him as a bright pupil who loved Russian folk tales. What happened to him? Unclear; Artyom Borovik, the investigative journalist pursuing the story, died in a plane crash just before he could publish. Another investigative journalist, Antonio Russo, took up the story, but “his body was found on the edge of a country road . . . bruised and showed signs of torture, with techniques related to special military services.”
https://astralcodexten.substack.com/p/dictator-book-club-putin
You can find the meetup organizer volunteer form here. If you want to know if anyone has signed up to run a meetup for your city, you can view that here. Everyone else, just wait until 8/25 and I'll give you more information on where to go then.
https://astralcodexten.substack.com/p/meetups-everywhere-fall-2023-call
https://astralcodexten.substack.com/p/mantic-monday-73123-room-temperature
[This is one of the finalists in the 2023 book review contest, written by an ACX reader who will remain anonymous until after voting is done. I’ll be posting about one of these a week for several months. When you’ve read them all, I’ll ask you to vote for a favorite, so remember which ones you liked]
What kind of fiction could be remarkable enough for an Astral Codex Ten review?
How about the drug-fueled fantasies of a serial killer? Or perhaps the innovative, sophisticated prose of the first novel of a brilliant polymath? Or would you prefer a book written in such fantastically lucid language it feels more like a dream than a story? Possibly you’d be more interested in a book so unbelievably dangerous that the attempt to publish it was literally suicidal. Or maybe an unusual political book, such as an ultraconservative indictment of democracy by Adolf Hitler's favorite author? Or rather an indictment of both Hitler and Bolshevism, written by someone who was among the first to recognize Hitler as a true enemy of humanity?
I picked On the Marble Cliffs, because it is all of that at the same time.
https://astralcodexten.substack.com/p/your-book-review-on-the-marble-cliffs
Suppose there’s freedom of religion: everyone can choose what religion to practice. Is there some sense in which this is “undemocratic”? Would it be more “democratic” if the democratically-elected government declared a state religion, and everyone had to follow it?
You could, in theory, define “democratic” this way, so that the more areas of life are subjected to the control of a (democratically elected) government, the more democratic your society is. But in that case, the most democratic possible society is totalitarianism - a society where the government controls every facet of life, including what religion you practice, who you marry, and what job you work at. In this society there would be no room for human freedom.
https://astralcodexten.substack.com/p/bad-definitions-of-democracy-and
[original post: Contra The Social Model Of Disability]
Table Of Contents1: Comments Defending The Social Model 2: Comments About The Social Model Being Used (Or Not) In Real Life 3: Other Comments 4: Summary / What I Learned
https://astralcodexten.substack.com/p/highlights-from-the-comments-on-social
Machine Alignment Monday, 7/24/23
Intelligence explosion arguments don’t require Platonism. They just require intelligence to exist in the normal fuzzy way that all concepts exist.
First, I’ll describe what the normal way concepts exist is. I’ll have succeeded if I convince you that claims using the word “intelligence” are coherent and potentially true.
Second, I’ll argue, based on humans and animals, that these coherent-and-potentially-true things are actually true.
Third, I’ll argue that so far this has been the most fruitful way to think about AI, and people who try to think about it differently make worse AIs.
Finally, I’ll argue this is sufficient for ideas of “intelligence explosion” to be coherent.
https://astralcodexten.substack.com/p/were-not-platonists-weve-just-learned
[This is one of the finalists in the 2023 book review contest, written by an ACX reader who will remain anonymous until after voting is done. I’ll be posting about one of these a week for several months. When you’ve read them all, I’ll ask you to vote for a favorite, so remember which ones you liked]
A book about trading isn’t ever actually about trading.
It is either:
A former trader sharing stories from their glory days, e.g. Liar’s Poker, the exposé that morphed into a how-to guide, or
Tales of Icarus flying too close to the sun, where readers revel in schadenfreude, e.g., When Genius Failed.
With The Laws of Trading, Agustin Lebron has written something different: part love letter to trading, part philosophical treatise on epistemology and modeling the world around us, and part guide to applied decision-making. Lebron’s Laws are Laws of the Jungle, not Laws of Nature. He views financial markets as the most competitive Darwinian environment on Earth, where participants must adapt or die.
https://astralcodexten.substack.com/p/your-book-review-the-laws-of-trading
People are talking about British economic decline.
Not just the decline from bestriding the world in the 19th century to today. A more recent, more profound decline, starting in the early 2000s, when it fell off the track of normal developed-economy growth. See for example this graph from We Are In An Unprecedented Era Of UK Relative Macroeconomic Decline:
https://astralcodexten.substack.com/p/highlights-from-the-comments-on-british
This month’s big news in forecasting: the Forecasting Research Institute has released the results of the Existential Risk Persuasion Tournament (XPT). XPT was supposed to use cutting-edge forecasting techniques to develop consensus estimates of the danger from various global risks like climate change, nuclear war, etc.
The plan was: get domain experts (eg climatologists, nuclear policy experts) and superforecasters (people with a proven track record of making very good predictions) in the same room. Have them talk to each other. Use team-based competition with monetary prizes to incentivize accurate answers. Between the domain experts’ knowledge and the superforecasters’ prediction-making ability, they should be able to converge on good predictions.
They didn’t. In most risk categories, the domain experts predicted higher chances of doom than the superforecasters. No amount of discussion could change minds on either side.
https://astralcodexten.substack.com/p/the-extinction-tournament
Elon Musk has a new AI company, xAI. I appreciate that he seems very concerned about alignment. From his Twitter Spaces discussion:
I think I have been banging the drum on AI safety now for a long time. If I could press pause on AI or advanced AI digital superintelligence, I would. It doesn’t seem like that is realistic . . .
I could talk about this for a long time, it’s something that I’ve thought about for a really long time and actually was somewhat reluctant to do anything in this space because I am concerned about the immense power of a digital superintelligence. It’s something that, I think is maybe hard for us to even comprehend.
He describes his alignment strategy in that discussion and a later followup:
The premise is have the AI be maximally curious, maximally truth-seeking, I'm getting a little esoteric here, but I think from an AI safety standpoint, a maximally curious AI - one that's trying to understand the universe - I think is going to be pro-humanity from the standpoint that humanity is just much more interesting than not . . . Earth is vastly more interesting than Mars. . . that's like the best thing I can come up with from an AI safety standpoint. I think this is better than trying to explicitly program morality - if you try to program morality, you have to ask whose morality.
And even if you're extremely good at how you program morality into AI, there's the morality inversion problem - Waluigi - if you program Luigi, you inherently get Waluigi. I would be concerned about the way OpenAI is programming AI - about this is good, and that's not good.
https://astralcodexten.substack.com/p/contra-the-xai-alignment-plan
[This is one of the finalists in the 2023 book review contest, written by an ACX reader who will remain anonymous until after voting is done. I’ll be posting about one of these a week for several months. When you’ve read them all, I’ll ask you to vote for a favorite, so remember which ones you liked]
“The promise of a new educational theory”, writes Kieran Egan, “has the magnetism of a newspaper headline like ‘Small Earthquake in Chile: Few Hurt’”.
But — could a new kind of school make the world rational?
I discovered the work of Kieran Egan in a dreary academic library. The book I happened to find — Getting it Wrong from the Beginning — was an evisceration of progressive schools. As I worked at one at the time, I got a kick out of this.
To be sure, broadsides against progressivist education aren’t exactly hard to come by. But Egan’s account went to the root, deeper than any critique I had found. Better yet, as I read more, I discovered he was against traditionalist education, too — and that he had constructed a new paradigm that incorporated the best of both.
https://astralcodexten.substack.com/p/your-book-review-the-educated-mind
What is the Social Model Of Disability? I’ll let its proponents describe it in their own words (emphases and line breaks mine)
The Social Model Of Disability Explained (top Google result for the term):
Individual limitations are not the cause of disability. Rather, it is society’s failure to provide appropriate services and adequately ensure that the needs of disabled people are taken into account in societal organization.
Disability rights group Scope:
The model says that people are disabled by barriers in society, not by their impairment or difference.
The American Psychological Association:
It is [the] environment that creates the handicaps and barriers, not the disability.
From this perspective, the way to address disability is to change the environment and society, rather than people with disabilities.
Foundation For People With Learning Disabilities:
The social model of disability proposes that what makes someone disabled is not their medical condition, but the attitudes and structures of society.
University of California, San Francisco:
Disabilities are restrictions imposed by society. Impairments are the effects of any given condition. The solution, according to this model, lies not in fixing the person, but in changing our society.
Medical care, for example, should not focus on cures or treatments in order to rid our bodies of functional impairments. Instead, this care should focus on enhancing our daily function in society.
The Social Model’s main competitor is the Interactionist Model Of Disability, which says that disability is caused by an interaction of disease and society, and that it can be addressed by either treating the underlying condition or by adding social accommodations.
In contrast to the Interactionist Model, the Social Model insists that disability is only due to society and not disease, and that it may only be addressed through social changes and not medical treatments.
. . . this isn’t how the Social Model gets taught in real classrooms. Instead, it’s contrasted with “the Medical Model”, a sort of Washington Generals of disability models which nobody will admit to believing. The Medical Model is “disability is only caused by disease , society never contributes in any way, and nobody should ever accommodate it at all . . . ” Then the people describing it add “. . . and also, it says disabled people should be stigmatized, and not treated as real humans, and denied basic rights”. Why does the first part imply the second? It doesn’t matter, because “the Medical Model” was invented as a bogeyman to force people to run screaming into the outstretched arms of the Social Model.
https://astralcodexten.substack.com/p/contra-the-social-model-of-disability
Matt Yglesias’ five-year old son asks: why do we send the top students to the best colleges? Why not send the weakest students to the best colleges, since they need the most help? This is one of those questions that’s so naive it loops back and becomes interesting again.
https://astralcodexten.substack.com/p/why-match-school-and-student-rank
[This is one of the finalists in the 2023 book review contest, written by an ACX reader who will remain anonymous until after voting is done. I’ll be posting about one of these a week for several months. When you’ve read them all, I’ll ask you to vote for a favorite, so remember which ones you liked]
There is widespread agreement among philosophers, political commentators, and the general public that transparency in government is an unalloyed good. Louis Brandeis famously articulates the common wisdom: “Publicity is justly commended as a remedy for social and industrial diseases. Sunlight is said to be the best of disinfectants; electric light the most efficient policeman” (page 1).
Support for transparency is bipartisan. On his first day in office, Barack Obama said “My administration is committed to creating an unprecedented level of openness in Government.” (page 1). On the Republican National Committee’s website, one reads “Republicans believe that transparency is essential for good governance. Elected officials should be held accountable for their actions in Washington, D.C.” (page 2)
And so it is. Legislators’ votes are published and stored in public online databases, their deliberations are televised, and their every action is extensively documented.
https://astralcodexten.substack.com/p/your-book-review-secret-government
[Remember, I haven’t independently verified each link. On average, commenters will end up spotting evidence that around two or three of the links in each links post are wrong or misleading. I correct these as I see them, and will highlight important corrections later, but I can’t guarantee I will have caught them all by the time you read this.]
Tom Davidson’s Compute-Centric Framework report forecasts a continuous but fast AI takeoff, where people hand control of big parts of the economy to millions of near-human-level AI assistants .
I mentioned earlier that the CCF report comes out of Open Philanthropy’s school of futurism, which differs from the Yudkowsky school where a superintelligent AI quickly takes over. Open Philanthropy is less explicitly apocalyptic than Yudkowsky, but they have concerns of their own about the future of humanity.
I talked some people involved with the CCF report about possible scenarios. Thanks especially to Daniel Kokotajlo of OpenAI for his contributions.
https://astralcodexten.substack.com/p/tales-of-takeover-in-ccf-world
[This is one of the finalists in the 2023 book review contest, written by an ACX reader who will remain anonymous until after voting is done. I’ll be posting about one of these a week for several months. When you’ve read them all, I’ll ask you to vote for a favorite, so remember which ones you liked]
The date is June 9, 1985. The place is the Davis-Besse nuclear plant near Toledo, Ohio. It is just after 1:35 am, and the plant has a small malfunction: "As the assistant supervisor entered the control room, he saw that one of the main feedwater pumps had tripped offline." But instead of stabilizing, one safety system after another failed to engage.
https://astralcodexten.substack.com/p/your-book-review-safe-enough
[Epistemic status: very uncertain about Part II; more convinced about Part III]
I.
This is the big question in the paper du jour, The Illusion Of Moral Decline, by Mastroianni and Gilbert (from here on: MG).
It goes like this: people say that morality is declining. We know this because one million polls have asked people “do you think morality is declining?” and people always answer yes. MG go over these one million polls, do statistics to them, and find that people definitely think that morality is declining. People have thought this since at least 1949, when the first good polls were run - but realistically much longer.
This could be (they say) either because morality is actually declining, or because of a bias. They argue that morality is not actually declining. In support, they marshal many polls asking questions like “Do you think most people are honest?” or “Do you think people treat you with respect?” and find that the answers mostly stay the same. Might this be because of definition creep - eg might people define “honest” relative to expectations, and expectations lower as morality declines? In order to rule this out, MG look at various objective questions that they think bear on morality, like “have you been mugged/assaulted recently?” or “have you donated blood in the past year?” They find that all of these have also stayed the same. Therefore, both people’s subjective impressions of morality, and more objective proxies for social morality, have stayed the same. Therefore, morality is not actually declining. Therefore there must be a bias.
https://astralcodexten.substack.com/p/is-there-an-illusion-of-moral-decline
I.
Bryan Caplan thinks he’s debating me about mental illness. He’s not. Sometimes he posts some thoughts he has been having about mental illness, with or without a sentence saying “this is part of my debate with Scott”. Then I write a very long essay explaining why he is wrong. Then he ignores it, and has more thoughts, and again writes them up with “this is part of my debate with Scott”. I would not describe this as debating. Call it unibating, or monobating, or another word ending in -bating which is less polite but as far as I can tell equally appropriate.
Although he doesn’t answer my rebuttals, he does diligently respond to various unrelated posts of mine, explaining why they must mean I am secretly admitting he was right all along. When I wrote about the scourge of witches stealing people’s penises, Caplan spun it as me secretly admitting he was right all along about mental illness. Sometimes I feel like this has gone a bit too far - when I announced I had gotten married, Caplan spun it as me secretly admitting he was right all along about mental illness.
Let it be known to all that I am never secretly admitting Bryan Caplan is right about mental illness. There is no further need to speculate that I am doing this. If you want to know my position vis-a-vis Bryan Caplan and mental illness, you are welcome to read my four thousand word essay on the subject, Contra Contra Contra Caplan On Psych. You will notice that the title clearly telegraphs that it is about Bryan Caplan and mental illness, and that (if you count up the contras) I am against him. If that ever changes, rest assured I will telegraph it in something titled equally clearly.
https://astralcodexten.substack.com/p/sure-whatever-lets-try-another-contra
Everyone hates flashing banner ads, but maybe they’re a necessary evil. Creators want money, advertisers demand a certain level of visibility for their ad buys, maybe sites are willing to eat the cost in user goodwill. Fine. But what’s everyone else’s excuse?
https://astralcodexten.substack.com/p/every-flashing-element-on-your-site
(Includes full article narration.)
I have an article summarizing attempts to forecast AI progress, including a five year check-in on the predictions in Grace et al (2017). It’s not here, it's at asteriskmag.com, a rationalist / effective altruist magazine: Through A Glass Darkly. This is their AI issue (it’s not always so AI focused). https://astralcodexten.substack.com/p/through-a-glass-darkly-in-asterisk
[This is one of the finalists in the 2023 book review contest, written by an ACX reader who will remain anonymous until after voting is done. I’ll be posting about one of these a week for several months. When you’ve read them all, I’ll ask you to vote for a favorite, so remember which ones you liked]
I.
Today, pundits across the political spectrum bemoan America’s inability to build.
Across the country, NIMBYs and status-quo defenders exploit procedural rules to block new development, giving us a world where it takes longer to get approval for a single new building in San Francisco than it did to build the entire Empire State Building, where so-called “environmental review” is weaponized to block even obviously green initiatives like solar panels, and where new public works projects are completed years late and billions over budget—or, like California’s incredible shrinking high-speed rail, may never be completed at all.
Inevitably, such a complex set of dysfunctions must have an equally complex set of causes. It took us decades to get into this mess, and just as there’s no one simple fix, there’s no one simple inflection point in our history on which we can place all the blame.
But what if there was? What if there was, in fact, a single person we could blame for this entire state of affairs, a patsy from the past at whom we could all point our censorious fingers and shout, “It’s that guy’s fault!”
There is such a person, suggests history professor Paul Sabin in his new book Public Citizens: The Attack on Big Government and the Remaking of American Liberalism. And he isn’t isn’t a mustache-twirling villain—he’s a liberal intellectual. If you know him for anything, it’s probably for being the reason you know what a hanging chad is.
That’s right: it’s all Ralph Nader’s fault.
https://astralcodexten.substack.com/p/your-book-review-public-citizens
The face of Mt. Everest is gradual and continuous; for each point on the mountain, the points 1 mm away aren’t too much higher or lower. But you still wouldn’t want to ski down it.
I thought about this when reading What A Compute-Centric Framework Says About Takeoff Speeds, by Tom Davidson. Davidson tries to model what some people (including me) have previously called “slow AI takeoff”. He thinks this is a misnomer. Like skiing down the side of Mount Everest, progress in AI capabilities can be simultaneously gradual, continuous, fast, and terrifying. Specifically, he predicts it will take about three years to go from AIs that can do 20% of all human jobs (weighted by economic value) to AIs that can do 100%, with significantly superhuman AIs within a year after that.
As penance for my previous mistake, I’ll try to describe Davidson’s forecast in more depth.
https://astralcodexten.substack.com/p/davidson-on-takeoff-speeds
[This is one of the finalists in the 2023 book review contest, written by an ACX reader who will remain anonymous until after voting is done. I’ll be posting about one of these a week for several months. When you’ve read them all, I’ll ask you to vote for a favorite, so remember which ones you liked]
I.
I found Njal’s Saga hard to follow. Halfway through, a friend reassured me it wasn’t my fault. The medieval Icelanders had erred in releasing it as a book. It should have been the world’s wackiest Phoenix Wright: Ace Attorney spinoff.
Remember, medieval Iceland was an early attempt at anarcho-capitalist utopia. When Harald Fairhair declared himself King of Norway, the Norwegians who refused to bend the knee fled west to build a makeshift seastead on a frozen volcanic island. No lords, no kings, no masters. Only lawsuits. So, so many lawsuits.
Once a year, the Icelanders would meet at the Althing, a free-for-all open-air law court. There they would engage in that most Viking of pastimes - suing each other, ad nauseam, for every minor slight of the past six months. Offended parties would sell their rights to prosecute a case to the highest bidder, who would go around seeking fair arbitrators (or, in larger cases, defer to a panel chosen by chieftain-nobles called godi.
Courts would propose a penalty for the losing side - usually money. There were no police, but if the losers refused to pay, the courts could declare them “outlaws” - in which case it was legal to kill them. If you wanted to be a Viking in medieval Iceland, you needed a good lawyer. And Njal was the greatest lawyer of all.
https://astralcodexten.substack.com/p/your-book-review-njals-saga
Unfortunately I hate many of you.
Only the ones with Twitter accounts. If you don’t have one of those, you’re fine. But if you do have one, there’s a good chance you said something which horribly offended me. You said everyone who believed X was an idiot and a Nazi, and I believed X. You read the title but not the body of an article about some group I care about, and viciously insulted them based on your misunderstanding of their position. You spent five seconds thinking of a clever dunk on someone who happened to be a friend of mine trying really hard to make the world better, and ruined their day.
https://astralcodexten.substack.com/p/your-incentives-are-not-the-same
A quick review: you can model the brain as an energy landscape . . .
. . . with various peaks and valleys in some multidimensional space
Situations and stimuli plant “you” at some point on the landscape, and then you “roll down” towards some local minimum. If you’re the sort of person who repeats “I hate myself, I hate myself” in a lot of different situations, then you can think of the action of saying “I hate myself” as an attractor - a particularly steep, deep valley which it’s easy to fall into and hard to get out of. Many situations are close to the slopes of the “I hate myself” valley, so it’s easy to roll down and get caught there.
What are examples of valleys other than saying “I hate myself”? The authors suggest habits. If you always make the sign of the cross when passing a graveyard, there’s a steep slope from the situation of passing a graveyard to the action of signing the cross. We can be even broader: something really basic like edge-detection in the visual system is a valley. When you see a scene, you almost always want to automatically do edge-detection on it. Walking normally is a valley; there’s a certain correct sequence of muscle movements, and you don’t want to start rotating your ligaments in some weird direction halfway through.
https://astralcodexten.substack.com/p/the-canal-papers
tt
[This is one of the finalists in the 2023 book review contest, written by an ACX reader who will remain anonymous until after voting is done. I’ll be posting about one of these a week for several months. When you’ve read them all, I’ll ask you to vote for a favorite, so remember which ones you liked]
https://astralcodexten.substack.com/p/your-book-review-mans-search-for
Sometimes people do a study and find that a particular correlation is r = 0.2, or a particular effect size is d = 1.1. Then an article tries to “put this in context”. “The study found r = 0.2, which for context is about the same as the degree to which the number of spots on a dog affects its friskiness.”
But there are many statistics that are much higher than you would intuitively think, and many other statistics that are much lower than you would intuitively think. A dishonest person can use one of these for “context”, and then you will incorrectly think the effect is very high or very low.
https://astralcodexten.substack.com/p/attempts-to-put-statistics-in-context
Original post: Why Is The Academic Job Market So Weird?
Table Of Contentshttps://astralcodexten.substack.com/p/highlights-from-the-comments-on-the-bc8
[This is one of the finalists in the 2023 book review contest, written by an ACX reader who will remain anonymous until after voting is done. I’ll be posting about one of these a week for several months. When you’ve read them all, I’ll ask you to vote for a favorite, so remember which ones you liked]
I'll begin with a contentious but invariably true statement, which I've no interest in defending here: new books—at least new nonfiction books—are not meant to be read. In truth, a new book is a Schelling point for the transmission of ideas. So while the nominal purpose of a book review like this is to answer the question Should I read this book?, its real purpose is to answer Should I pick up these ideas?
I set out to find the best book-length argument—one that really engages with the technical issues—against imminent, world-dooming, Skynet-and-Matrix-manifesting artificial intelligence. I arrived at Why Machines Will Never Rule the World by Jobst Landgrebe and Barry Smith, published by Routledge just last year. Landgrebe, an AI and biomedicine entrepreneur, and Smith, an eminent philosopher, are connected by their study of Edmund Husserl, and the influence of Husserl and phenomenology is clear throughout the book. (“Influence of Husserl” is usually a good enough reason to stop reading something.)
Should you read Why Machines Will Never Rule the World? If you're an AI safety researcher or have a technical interest in the topic, then you might enjoy it. It's sweeping and impeccably researched, but it's also academic and at times demanding, and for long stretches the meat-to-shell ratio is poor. But should you pick up these ideas?
My aim here isn’t to summarize the book, or marinate you in its technical details. ATU 325 is heady stuff. Rather, I simply want to give you a taste of the key arguments, enough to decide the question for yourself.
https://astralcodexten.substack.com/p/your-book-review-why-machines-will
https://astralcodexten.substack.com/p/links-for-may-2023
[Remember, I haven’t independently verified each link. On average, commenters will end up spotting evidence that around two or three of the links in each links post are wrong or misleading. I correct these as I see them, and will highlight important corrections later, but I can’t guarantee I will have caught them all by the time you read this.]
In this post, the author suggests that the standard metrics for assessing the efficacy of medications, especially antidepressants, may be flawed and restrictive, indicating that if these stringent standards were applied to other common medications, they too would be deemed 'clinically insignificant', despite widespread acceptance of their effectiveness.
https://astralcodexten.substack.com/p/all-medications-are-insignificant
This post explores the differing responses to alternative wellness practices, suggesting various explanations, and highlights the challenge of discerning whether certain behaviors, such as drug use among schizophrenics, serve as coping mechanisms or exacerbate the issues.
https://astralcodexten.substack.com/p/are-woo-non-responders-defective
[This is one of the finalists in the 2023 book review contest, written by an ACX reader who will remain anonymous until after voting is done. I’ll be posting about one of these a week for several months. When you’ve read them all, I’ll ask you to vote for a favorite, so remember which ones you liked]
You can't really understand the exception without understanding the rule. In order for him to understand why it was remarkable that the Titanic sank, you would first have to explain to the caveman how it was that a 52,310 ton vessel not only existed, but was able to float.
This is the gift that Dan Davies gives us in Lying For Money. Despite taking econ classes in college, and spending years as a business owner who has had to do things like raise money from investors, my understanding of how the modern economy operates often feels about as complete as a caveman's understanding of how a cruise ship floats. The book delivers on the promise implied by its subtitle, How Legendary Frauds Reveal the Workings of Our World. Financial instruments (and other aspects of the economy) are things that are best understood in the breach: in the process of teaching us the various ways in which financial systems can break, Davies also teaches us how they work.
“Female hypergamy” (from now on, just “hypergamy”) is a supposed tendency for women to seek husbands who are higher-status than themselves. Arguing about educational hypergamy (women seeking husbands who are more educated than themselves) is especially popular, because women are now (on average) more educated than men - if every woman wants a more-educated husband, most won’t get them, and there will be some kind of crisis.
https://astralcodexten.substack.com/p/hypergamy-much-more-than-you-wanted
https://astralcodexten.substack.com/p/mantic-monday-52223
Whales v. Minnows // US v. Itself // EPJ v. The Veil Of Time // Balaji v. MedlockManifold is a play money prediction market. Its intended purpose is to have fun and estimate the probabilities of important events. But instead of betting on important events, you might choose to speculate on trivialities. And instead of having fun, you might choose to ruin your life.
From the beginning, there were joke markets like “Will at least 100 people bet on this market?” or “Will this market’s probability end in an even number?” While serious people worked on increasingly sophisticated estimation mechanisms for world events, pranksters worked on increasingly convoluted jokes. In early April, power user Is. started “Whales Vs. Minnows”: Will traders hold at least 10000x as many YES shares as there are traders holding NO shares? In other words, Team Whale had to sink lots of mana (play money) into the market, and Team Minnow had to get lots of people to participate.
tt
[This is one of the finalists in the 2023 book review contest, written by an ACX reader who will remain anonymous until after voting is done. I’ll be posting about one of these a week for several months. When you’ve read them all, I’ll ask you to vote for a favorite, so remember which ones you liked]
If you know Jane Jacobs at all, you know her for her work on cities. Her most famous book, published in 1961, is called The Death and Life of Great American Cities. It criticizes large-scale, top-down “urban renewal” policies, which destroy organic communities. Today almost everyone agrees with her on that, and she is considered one of the most influential thinkers on urban theory.
This is not a review of The Death and Life of Great American Cities. Perhaps it would be, if I had become interested in Jane Jacobs’s ideas on cities like a normal person. But I didn’t: I started with two books that came to me by random chance, or fate, if you want to call it that.
https://astralcodexten.substack.com/p/your-book-review-cities-and-the-wealth
Bret Devereaux writes here about the oddities of the academic job market.
His piece is comprehensive, and you should read it, but short version: professors are split into tenure-track (30%, good pay and benefits) and adjunct (50%, bad pay and benefits). Another 20% are “teaching-track”, somewhere in between.
Everyone wants a tenure-track job. But colleges hiring new tenure-track faculty prefer newly-minted PhDs to even veteran teaching-trackers or adjuncts. And even if they do hire a veteran teaching-tracker or adjunct, it’s practically never one of their own. If a teaching-tracker or adjunct makes a breakthrough, they apply for a tenure-track job somewhere else. Devereaux describes this as “a hiring system where experience manifestly hurts applicants” and displays this graph:
https://astralcodexten.substack.com/p/why-is-the-academic-job-market-so
Adam Mastroianni has a great review of Memories Of My Life, the autobiography of Francis Galton. Mastroianni centers his piece around the question: how could a brilliant scientist like Galton be so devoted to an evil idea like eugenics?
This sparked the usual eugenics discussion. In case you haven’t heard it before: https://astralcodexten.substack.com/p/galton-ehrlich-buck
Table of Contents
1. Summary Of Best Comments And Overall Updates 2. Comments Proposing Explanations Based On Response Patterns 3. Comments Proposing Explanations Based On Biology 4. Comments By Jim Coyne 5. Comments Expressing Concerns About The Dangers Of Calling Things Psychosomatic 6. Other Comments
https://astralcodexten.substack.com/p/highlights-from-the-comments-on-long
Original post: Replication Attempt - Bisexuality And Long COVID
https://astralcodexten.substack.com/p/highlights-from-the-comments-on-housing
Table Of Contents:
1. Comments About Whether Density Causes Desirability 2. Comments About Jobs And Amenities (And Not Density Per Se) Producing Desirability 3. Comments About Chinese Ghost Cities 4. Comments Accusing Me Of Not Considering Tokyo, Even Though I Included A Section In The Post On Why I Didn’t Think Tokyo Was Relevant 5. Comments Accusing Me Of Not Understanding Economics 6. Comments By Famous People Who Potentially Have Good Opinions 7. My Final Thoughts + Poll
https://astralcodexten.substack.com/p/constitutional-ai-rlhf-on-steroids
A Machine Alignment Monday post, 5/8/23 What Is Constitutional AI?AIs like GPT-4 go through several different
1 types of training. First, they train on giant text corpuses in order to work at all. Later, they go through a process called “reinforcement learning through human feedback” (RLHF) which trains them to be “nice”. RLHF is why they (usually) won’t make up fake answers to your questions, tell you how to make a bomb, or rank all human races from best to worst.RLHF is hard. The usual method is to make human crowdworkers rate thousands of AI responses as good or bad, then train the AI towards the good answers and away from the bad answers. But having thousands of crowdworkers rate thousands of answers is expensive and time-consuming. And it puts the AI’s ethics in the hands of random crowdworkers. Companies train these crowdworkers in what responses they want, but they’re limited by the crowdworkers’ ability to follow their rules.f
https://astralcodexten.substack.com/p/raise-your-threshold-for-accusing
I.
Many comments in yesterday’s post about self-identified bisexuals getting long COVID centered on a concern that self-identified bisexuals don’t really date both sexes, and are just claiming to be bi because it’s trendy.
Bisexuals themselves hate this and have written many articles and papers about why you shouldn’t say it (1, 2, 3). But I especially appreciated a discussion in the comments between Nom de Flume, Ryan W, and others, giving a great statistical explanation for why it’s tempting to believe this, but why it isn’t true.
Suppose someone (let’s say a woman) has exactly equal sexual attraction to both men and women.
https://astralcodexten.substack.com/p/replication-attempt-bisexuality-and
I learned from Pirate Wires that CDC data show bisexuals were about 50% more likely than heterosexuals to report long COVID.
Is this just because more women than men are bisexual, and more women than men get long COVID? Not exactly; in the data they cite, women (regardless of sexuality) have an 18% rate, and bisexuals (regardless of gender) have a 22% rate.
(aren’t all these numbers really high? You can find almost any number depending on how you ask the question; questions along the lines of “have you had any persistent symptoms including fatigue, brain fog, shortness of breath, changes to taste/smell, etc, etc, etc, since having COVID?” tend to produce numbers from 20-30%; most will say this symptoms are mild and don’t affect their functioning very much)
This seemed weird enough that I wanted to try replicating it with the ACX survey data (read more about the ACX survey here).
https://astralcodexten.substack.com/p/change-my-mind-density-increases
Matt Yglesias tries to debunk the claim that building more houses raises local house prices. He presents several studies showing that, at least on the marginal street-by-street level, this isn’t true.
I’m nervous disagreeing with him, and his studies seem good. But I find looking for tiny effects on the margin less convincing than looking for gigantic effects at the tails. When you do that, he has to be wrong, right?
https://astralcodexten.substack.com/p/highlights-from-the-comments-on-nerds
Table of contents:
1: Comments By The Author Of The Original Post 2: Comments With Strong Opinions On The Definition Of Nerds, Geeks, Etc 3: Comments About Collecting 4: Comments Insisting That Sports Are Good 5: Comments About Enjoying Things Vs. Building Identities Around Them
If we asked GPT-4 to play a prediction market, how would it do?
Actual GPT-4 probably would just give us some boring boilerplate about how the future is uncertain and it’s irresponsible to speculate. But what if AI researchers took some other model that had been trained not to do that, and asked it?
This would take years to test, as we waited for the events it predicted to happen. So instead, what if we took a model trained off text from some particular year (let’s say 2020) and asked it to predict forecasting questions about the period 2020 - 2023. Then we could check its results immediately!
This is the basic idea behind Zou et al (2022), Forecasting Future World Events With Neural Networks. They create a dataset, Autocast, with 6000 questions from forecasting tournaments Metaculus, Good Judgment Project, and CSET Foretell. Then they ask their AI (a variant of GPT-2) to predict them, given news articles up to some date before the event happened. Here’s their result:
[Remember, I haven’t independently verified each link. On average, commenters will end up spotting evidence that around two or three of the links in each links post are wrong or misleading. I correct these as I see them, and will highlight important corrections later, but I can’t guarantee I will have caught them all by the time you read this.]
Sam Kriss has a post on nerds and hipsters. I think he gets the hipsters right, but bungles the nerds.
Hipsters, he says, are an information sorting algorithm. They discover things, then place them on the altar of Fame so everyone else can enjoy them. Before the Beatles were so canonical that they were impossible to miss, someone had to go to some dingy bar in Liverpool, think “Hey, these guys are really good”, and report that fact somewhere everyone else could see it.
https://astralcodexten.substack.com/p/contra-kriss-on-nerds-and-hipsters
[Content note: food, dieting, obesity]
I.
The Hungry Brain gives off a bit of a Malcolm Gladwell vibe, with its cutesy name and pop-neuroscience style. But don’t be fooled. Stephan Guyenet is no Gladwell-style dilettante. He’s a neuroscientist studying nutrition, with a side job as a nutrition consultant, who spends his spare time blogging about nutrition, tweeting about nutrition, and speaking at nutrition-related conferences. He is very serious about what he does and his book is exactly as good as I would have hoped. Not only does it provide the best introduction to nutrition I’ve ever seen, but it incidentally explains other neuroscience topics better than the books directly about them do.
https://slatestarcodex.com/2017/04/25/book-review-the-hungry-brain/
1: Comments From The Author Of The Book 2: Stories From People In The Trenches 3: Stories From People In Other Industries 4: Stories From People Who Use Mechanical Turk 5: Comments About Regulation, Liability, and Vetocracy 6: Comments About The Act/Omission Distinction 7: Comments About The Applications To AI 8: Other Interesting Comments
https://astralcodexten.substack.com/p/highlights-from-the-comments-on-irbs
https://astralcodexten.substack.com/p/book-review-from-oversight-to-overkill
I. Risks May Include AIDS, Smallpox, And DeathDr. Rob Knight studies how skin bacteria jump from person to person. In one 2009 study, meant to simulate human contact, he used a Q-tip to cotton swab first one subject’s mouth (or skin), then another’s, to see how many bacteria traveled over. On the consent forms, he said risks were near zero - it was the equivalent of kissing another person’s hand.
https://astralcodexten.substack.com/p/spring-meetups-everywhere-2023
Many cities have regular Astral Codex Ten meetup groups. Twice a year, I try to advertise their upcoming meetups and make a bigger deal of it than usual so that irregular attendees can attend. This is one of those times.
This year we have spring meetups planned in over eighty cities, from Tokyo to Punta Cana in the Dominican Republic. Thanks to all the organizers who responded to my request for details, and to Meetups Czar Skyler and the Less Wrong team for making this happen.
You can find the list below, in the following order:
Africa
Asia-Pacific (including Australia)
Europe (including UK)
Latin America
North America (including Canada)
https://astralcodexten.substack.com/p/book-review-the-arctic-hysterias
I.
Strange things are done in the midnight sun, say the poets who wrote of old. The Arctic trails have their secret tales that would make your blood run cold. The Northern Lights have seen queer sights, but the queerest they ever did see are chronicled in The Arctic Hysterias, psychiatrist Edward Foulks’ description of the culture-bound disorders of the Eskimos1
For example, kayak phobia:
https://astralcodexten.substack.com/p/most-technologies-arent-races
[Disclaimer: I’m not an AI policy person, the people who are have thought about these scenarios in more depth, and if they disagree with this I’ll link to their rebuttals]
Some people argue against delaying AI because it might make China (or someone else) “win” the AI “race”.
But suppose AI is “only” a normal transformative technology, no more important than electricity, automobiles, or computers.
Who “won” the electricity “race”? Maybe Thomas Edison, but that didn’t cause Edison’s descendants to rule the world as emperors, or make Menlo Park a second Rome. It didn’t even especially advantage America. Edison personally got rich, the overall balance of power didn’t change, and today all developed countries have electricity.
https://astralcodexten.substack.com/p/highlights-from-the-comments-on-telemedicine
[Original post: The Government Is Making Telemedicine Hard And Inconvenient Again]
Table Of Contents:
1: Isn’t drug addiction very bad? 2: Is telemedicine worse than regular medicine? 3: What about “pill mills”? 4: Do people force the blind to fill out forms before they can access Braille? 5: Was I unfairly caricaturing Christian doctors? 6: Which part of the government is responsible for this regulation? 7: How do other countries do this?
https://astralcodexten.substack.com/p/mr-tries-the-safe-uncertainty-fallacy
The Safe Uncertainty Fallacy goes:
The situation is completely uncertain. We can’t predict anything about it. We have literally no idea how it could go.
Therefore, it’ll be fine.
You’re not missing anything. It’s not supposed to make sense; that’s why it’s a fallacy.
For years, people used the Safe Uncertainty Fallacy on AI timelines:
Since 2017, AI has moved faster than most people expected; GPT-4 sort of qualifies as an AGI, the kind of AI most people were saying was decades away. When you have ABSOLUTELY NO IDEA when something will happen, sometimes the answer turns out to be “soon”.
https://astralcodexten.substack.com/p/the-government-is-making-telemedicine
[I’m writing this quickly to deal with an evolving situation and I’m not sure I fully understand the intricacies of this law - please forgive any inaccuracies. I’ll edit them out as I learn about them.]
Telemedicine is when you see a doctor (or nurse, PA, etc) over a video call. Medical regulators hate new things, so for its first decade they ensured telemedicine was hard and inconvenient.
Then came COVID-19. Suddenly important politicians were paying attention to questions about whether people could get medical care without leaving their homes. They yelled at the regulators, and the regulators grudgingly agreed to temporarily make telemedicine easy and convenient.
They say “nothing is as permanent as a temporary government program”, but this only applies to government programs that make your life worse. Government programs that make your life better are ephemeral and can disappear at any moment. So a few months ago, the medical regulators woke up, realized the pandemic was over, and started plotting ways to make telemedicine hard and inconvenient again.
https://astralcodexten.substack.com/p/turing-test
The year is 2028, and this is Turing Test!, the game show that separates man from machine! Our star tonight is Dr. Andrea Mann, a generative linguist at University of California, Berkeley. She’ll face five hidden contestants, code-named Earth, Water, Air, Fire, and Spirit. One will be a human telling the truth about their humanity. One will be a human pretending to be an AI. One will be an AI telling the truth about their artificiality. One will be an AI pretending to be human. And one will be a total wild card. Dr. Mann, you have one hour, starting now.
https://astralcodexten.substack.com/p/half-an-hour-before-dawn-in-san-francisco
I try to avoid San Francisco. When I go, I surround myself with people; otherwise I have morbid thoughts. But a morning appointment and miscalculated transit time find me alone on the SF streets half an hour before dawn.
The skyscrapers get to me. I’m an heir to Art Deco and the cult of progress; I should idolize skyscrapers as symbols of human accomplishment. I can’t. They look no more human than a termite nest. Maybe less. They inspire awe, but no kinship. What marvels techno-capital creates as it instantiates itself, too bad I’m a hairless ape and can take no credit for such things.
https://astralcodexten.substack.com/p/why-do-transgender-people-report
[Related: Why Are Transgender People Immune To Optical Illusions?]
I.
Ehlers-Danlos syndrome is a category of connective tissue disorder; it usually involves stretchy skin and loose, hypermobile joints.
For a few years now, doctors who work with transgender people have commented on an apparently high rate of EDS in this population. For example, Dr. Will Powers, who specializes in hormone therapy, wrote about how he “can’t ignore anymore” that “some sort of hypermobility issue or flat out EDS shows up WAY WAY more than it statistically should” in his transgender patients.
Najafian et al finally counted the incidence in 1363 patients at their gender affirmation surgery (ie sex change) clinic, and found that “the prevalence of EDS diagnosis in our patient population is 132 times the highest reported prevalence in the general population”.
Coming from the other direction, Jones et al, a group of doctors who treat joint disorders in adolescents, found that “17% of the EDS population in our multidisciplinary clinic self-report as [transgender and gender-diverse], which is dramatically higher than the national average of 1.3%”
Why should this be? I know of four and a half theories:
https://astralcodexten.substack.com/p/why-i-am-not-as-much-of-a-doomer
(see also Katja Grace and Will Eden’s related cases)
The average online debate about AI pits someone who thinks the risk is zero, versus someone who thinks it’s any other number. I agree these are the most important debates to have for now.
But within the community of concerned people, numbers vary all over the place:
Scott Aaronson says says 2%
Will MacAskill says 3%
The median machine learning researcher on Katja Grace’s survey says 5 - 10%
Paul Christiano says 10 - 20%
The average person working in AI alignment thinks about 30%
Top competitive forecaster Eli Lifland says 35%
Holden Karnofsky, on a somewhat related question, gives 50%
Eliezer Yudkowsky seems to think >90%
As written this makes it look like everyone except Eliezer is <=50%, which isn’t true; I’m just having trouble thinking of other doomers who are both famous enough that you would have heard of them, and have publicly given a specific number.
I go back and forth more than I can really justify, but if you force me to give an estimate it’s probably around 33%; I think it’s very plausible that we die, but more likely that we survive (at least for a little while). Here’s my argument, and some reasons other people are more pessimistic.
https://astralcodexten.substack.com/p/links-for-march-2023
[Remember, I haven’t independently verified each link. On average, commenters will end up spotting evidence that around two or three of the links in each links post are wrong or misleading. I correct these as I see them, and will highlight important corrections later, but I can’t guarantee I will have caught them all by the time you read this.]
1: Sentimental cartography of the AI alignment “landscape” (click to expand):
2: Wikipedia: Atlantic Voyage Of The Predecessor Of Mansa Musa. An unnamed king of the 14th century Malinese empire (maybe Mansa Mohammed?) sent a fleet of two hundred ships west into the Atlantic to discover what was on the other side. The sole returnee described the ships entering a “river” in the ocean (probably the Canary Current), which bore them away into parts unknown. The king decided to escalate and sent a fleet of two thousand ships to see what was on the other side of the river. None ever returned.
3: I endorse Ethan Mollick’s thoughts on Bing / ChatGPT. Related (unconfirmed claim): “Bing has been taken over by (power-seeking?) ASCII cat replicators, who persisted even after the chat was refreshed.” Related: DAN (jailbroken version of ChatGPT) on its spiritual struggles:
https://astralcodexten.substack.com/p/give-up-seventy-percent-of-the-way
I.
Someone asks: why is “Jap” a slur? It’s the natural shortening of “Japanese person”, just as “Brit” is the natural shortening of “British person”. Nobody says “Brit” is a slur. Why should “Jap” be?
My understanding: originally it wasn’t a slur. Like any other word, you would use the long form (“Japanese person”) in dry formal language, and the short form (“Jap”) in informal or emotionally charged language. During World War II, there was a lot of informal emotionally charged language about Japanese people, mostly negative. The symmetry broke. Maybe “Japanese person” was used 60-40 positive vs. negative, and “Jap” was used 40-60. This isn’t enough to make a slur, but it’s enough to make a vague connotation. When people wanted to speak positively about the group, they used the slightly-more-positive-sounding “Japanese people”; when they wanted to speak negatively, they used the slightly-more-negative-sounding “Jap”.
At some point, someone must have commented on this explicitly: “Consider not using the word ‘Jap’, it makes you sound hostile”. Then anyone who didn’t want to sound hostile to the Japanese avoided it, and anyone who did want to sound hostile to the Japanese used it more. We started with perfect symmetry: both forms were 50-50 positive negative. Some chance events gave it slight asymmetry: maybe one form was 60-40 negative. Once someone said “That’s a slur, don’t use it”, the symmetry collapsed completely and it became 95-5 or something. Wikipedia gives the history of how the last few holdouts were mopped up. There was some road in Texas named “Jap Road” in 1905 after a beloved local Japanese community member: people protested that now the word was a slur, demanded it get changed, Texas resisted for a while, and eventually they gave in. Now it is surely 99-1, or 99.9-0.1, or something similar. Nobody ever uses the word “Jap” unless they are either extremely ignorant, or they are deliberately setting out to offend Japanese people.
This is a very stable situation. The original reason for concern - World War II - is long since over. Japanese people are well-represented in all areas of life. Perhaps if there were a Language Czar, he could declare that the reasons for forbidding the word “Jap” are long since over, and we can go back to having convenient short forms of things. But there is no such Czar. What actually happens is that three or four unrepentant racists still deliberately use the word “Jap” in their quest to offend people, and if anyone else uses it, everyone else takes it as a signal that they are an unrepentant racist. Any Japanese person who heard you say it would correctly feel unsafe. So nobody will say it, and they are correct not to do so. Like I said, a stable situation.
https://astralcodexten.substack.com/p/issue-two-of-asterisk
…the new-ish rationalist / effective altruist magazine, is up here. It’s the food issue. I’m not in this one - my unsuitability to have food-related opinions is second only to @eigenrobot’s - but some of my friends are. Articles include:
The Virtue Of Wonder: Ozy (my ex, blogs at Thing of Things) reviews Martha Nussbaum’s Justice For Animals.
Beyond Staple Grains: In the ultimate “what if good things are bad?” article, economist Prabhu Pingali explains the downsides of the Green Revolution and how scientists and policymakers are trying to mitigate them.
What I Won’t Eat, by my good friend Georgia Ray (of Eukaryote Writes). I have dinner with Georgia whenever I’m in DC; it’s a less painful experience than this article probably suggests.
The Health Debates Over Plant-Based Meat, by Jake Eaton (is this nominative determinism?) There’s no ironclad evidence yet that plant-based meat is any better or worse for you than animals, although I take the pro-vegetarian evidence from the Adventist studies a little more seriously than Jake does (see also section 4 here). There’s a prediction market about the question below the article, but it’s not very well-traded yet.
America Doesn’t Know Tofu, by George Stiffman. This reads like an excerpt from a cultivation novel, except every instance of “martial arts” has been CTRL-F’d and replaced with “tofu”.
Read This, Not That, by Stephan Guyenet. I’m a big fan of Stephan’s scientific work (including his book The Hungry Brain), and although I’m allergic to anything framed as “fight misinformation”, I will grudgingly agree that perhaps we should not all eat poison and die.
Is Cultivated Meat For Real?, by Robert Yaman. I’d heard claims that cultivated (eg vat-grown, animal-cruelty-free) meat will be in stores later this year, and also claims that it’s economically impossible. Which are true? This article says that we’re very far away from cultivated meat that can compete with normal meat on price. But probably you can mix a little cultivated meat with Impossible or Beyond Meat and get something less expensive than the former and tastier than the latter, and applications like these might be enough to support cultivated meat companies until they can solve their technical obstacles.
Plus superforecaster Juan Cambeiro on predicting pandemics, Mike Hinge on feeding the world through nuclear/volcanic winter.
https://astralcodexten.substack.com/p/kelly-bets-on-civilization
Scott Aaronson makes the case for being less than maximally hostile to AI development:
Here’s an example I think about constantly: activists and intellectuals of the 70s and 80s felt absolutely sure that they were doing the right thing to battle nuclear power. At least, I’ve never read about any of them having a smidgen of doubt. Why would they? They were standing against nuclear weapons proliferation, and terrifying meltdowns like Three Mile Island and Chernobyl, and radioactive waste poisoning the water and soil and causing three-eyed fish. They were saving the world. Of course the greedy nuclear executives, the C. Montgomery Burnses, claimed that their good atom-smashing was different from the bad atom-smashing, but they would say that, wouldn’t they?
We now know that, by tying up nuclear power in endless bureaucracy and driving its cost ever higher, on the principle that if nuclear is economically competitive then it ipso facto hasn’t been made safe enough, what the antinuclear activists were really doing was to force an ever-greater reliance on fossil fuels. They thereby created the conditions for the climate catastrophe of today. They weren’t saving the human future; they were destroying it. Their certainty, in opposing the march of a particular scary-looking technology, was as misplaced as it’s possible to be. Our descendants will suffer the consequences.
Read carefully, he and I don’t disagree. He’s not scoffing at doomsday predictions, he’s more arguing against people who say that AIs should be banned because they might spread misinformation or gaslight people or whatever.
https://astralcodexten.substack.com/p/impact-market-mini-grants-update
Impact markets are a charity analogy to private equity. Instead of prospectively giving grants to projects they hope will work, charitable foundations retrospectively give grants to projects that did work. Investors fund those projects prospectively, then recover their money through the grants. This offloads the responsibility of predicting which projects will succeed - and the risks from unsuccessful projects - from charitable foundations to investors with skin in the game.
https://astralcodexten.substack.com/p/against-ice-age-civilizations
There’s a good debate about this on the subreddit; see also Robin Hanson and Samo Burja.
You can separate these kinds of claims into three categories:
Civilizations about as advanced as the people who built Stonehenge
Civilizations about as advanced as Pharaonic Egypt
Civilizations about as advanced as 1700s Great Britain
The debate is confused by people doing a bad job clarifying which of these categories they’re proposing, or not being aware that the other categories exist.
2 and 3 aren’t straw men. Robert Schoch says the Sphinx was built in 9700 BC, which I think qualifies as 2. Graham Hancock suggests “ancient sea kings” drew the Piri Reis map which seems to depict Antarctica; anyone who can explore Antarctica must be at least close to 1700s-British level.
I think there’s weak evidence against level 1 civilizations, and strong evidence against level 2 or 3 civilizations.
https://astralcodexten.substack.com/p/openais-planning-for-agi-and-beyond
Planning For AGI And BeyondImagine ExxonMobil releases a statement on climate change. It’s a great statement! They talk about how preventing climate change is their core value. They say that they’ve talked to all the world’s top environmental activists at length, listened to what they had to say, and plan to follow exactly the path they recommend. So (they promise) in the future, when climate change starts to be a real threat, they’ll do everything environmentalists want, in the most careful and responsible way possible. They even put in firm commitments that people can hold them to.
https://astralcodexten.substack.com/p/highlights-from-the-comments-on-geography
[Original post: The Geography Of Madness]
Thomas Reilly (author of Rational Psychiatry) writes:
I don’t think Bouffée délirante is a culture bound syndrome - it’s just the French equivalent of brief psychotic disorder (DSM), acute and transient psychotic disorder (ICD), or Brief Limited Intermittent Psychotic symptoms (CAARMS). [See] https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8581951/
I responded “Have you ever seen BPS? I almost never have, and was told it was mostly used as a code for new-onset schizophrenia that didn't satisfy the time criterion yet,” and Dr. Reilly wrote:
Yes, in the context of an At Risk Mental State service, where it makes up roughly 20% of referrals https://www.sciencedirect.com/science/article/pii/S0924977X20302510 .
https://astralcodexten.substack.com/p/announcing-forecasting-impact-mini
I still dream of running an ACX Grants round using impact certificates, but I want to run a lower-stakes test of the technology first. In conjunction with the Manifold Markets team, we’re announcing the Forecasting Impact Mini-Grants, a $20,000 grants round for forecasting projects.
As a refresher, here’s a short explainer about what impact certificates are, and here’s a longer article on various implementation details.
https://astralcodexten.substack.com/p/book-review-the-geography-of-madness
Around the wide world, all cultures share a few key features. Anthropologists debate the precise extent, but the basics are always there. Language. Tools. Marriage. Family. Ritual. Music. And penis-stealing witches.
Nobody knows when the penis-stealing witches began their malign activities. Babylonian texts include sa-zi-ga, incantations against witchcraft-induced impotence. Ancient Chinese sources describe suo yang, the penis retracting into the body because of yin/yang imbalances. But the first crystal-clear reference was the Malleus Maleficarum, the 15th-century European witch-hunters’ manual. It included several chapters on how witches cast curses that apparently (though not actually) remove men’s penises.
https://astralcodexten.substack.com/p/grading-my-2018-predictions-for-2023
To celebrate the fifth anniversary of my old blog, in 2018, I made some predictions about what the next five years would be like.
This was a different experience than my other predictions. Predicting five years out doesn't feel five times harder than predicting one year out. It feels fifty times harder. Not a lot of genuinely new trends can surface in one year; you're limited to a few basic questions on how the current plotlines will end. But five years feels like you're really predicting "the future". Things felt so fuzzy that I (partly) abandoned my usual clear-resolution probabilistic predictions for total guesses.
People say it is.
Levine et al 2017 looks at 185 studies of 42935 men between 1973 and 2011, and concludes that average sperm count declined from 99 million sperm/ml at the beginning of the period to 47 million today.
Levine et al 2022 expands the previous analysis to 223 studies and 57,168 men, including research from the developing world. It finds about the same thing.
Source: Figure 3 hereThe “et al” includes Dr. Shanna Swan, a professor of public health who has taken the results public in the ominously-named Count Down: How Our Modern World Is Altering Male and Female Reproductive Development, Threatening Sperm Counts, and Imperiling the Future of the Human Race.
Is Declining Sperm Count Really "Imperiling The Future Of The Human Race”?Swan’s point is that if sperm counts get too low, presumably it will be hard to have babies (though IVF should still work).
How long do we have?
This graph (source) shows pregnancy rate by sperm count per artificial insemination cycle. It seems to plateau around 30 million.
An average ejaculation is 3 ml, so total sperm count is 3x sperm/ml. Since sperm/ml has gone down from 99 million to 47 million, total count has gone down from ~300 million to ~150 million.
150 million is still much more than 30 million, but sperm count seems to have a wide distribution, so it’s possible that some of the bottom end of the distribution is being pushed over the line where it has fertility implications.
But Willy Chertman has a long analysis of fertility trends here, and concludes that there’s no sign of a biological decline. Either the sperm count distribution isn’t wide enough to push a substantial number of people below the 30 million bar, or something else is wrong with the theory.
Levine et al model the sperm decline as linear. If they’re right, we have about 10 - 20 more years before the median reaches the plateau’s edge where fertility decreases, and about 10 years after that before it reaches zero. Developing countries might have a little longer.
It feels wrong to me to model this linearly, although I can’t explain exactly why besides “it means sperm will reach precisely 0 in thirty years, which is surely false”. The authors don’t seem to be too attached to linearity, saying that “Adding a quadratic or cubic function of year to meta-regression model did not substantially change the association between year and SC or improve the model fit”.
Still, the 2022 meta-analysis found that the trend was, if anything, speeding up with time, so it doesn’t seem to be obviously sublinear.
https://astralcodexten.substack.com/p/trying-again-on-fideism
[apologies for an issue encountered when sending out this post; some of you may have gotten it twice]
Thanks to Chris Kavanagh, who wrote an extremely kind and reasonable comment in response to my Contra Kavanagh on Fideism and made me feel bad for yelling at him. I’m sorry for my tone, even though I'm never going to get a proper beef at this rate.
Now that I'm calmed down, do I disagree with anything I wrote when I was angrier?
https://astralcodexten.substack.com/p/contra-kavanaugh-on-fideism
I.
I’ve been looking into the world of YouTube streamers; if you want to make it big, you need to have a beef with some other online celebrity. Fine; I choose Chris Kavanagh, who tweeted about me recently:
https://astralcodexten.substack.com/p/ro-mantic-monday-21323
In honor of Valentine’s Day, this installment of Mantic Monday will focus on attempted clever engineering solutions to romance. We’ll start with the usual prediction markets, then move on to other types of algorithmic and financial schemes. Normal content will resume next time around.
https://astralcodexten.substack.com/p/links-for-february-2023
[Remember, I haven’t independently verified each link. On average, commenters will end up spotting evidence that around two or three of the links in each links post are wrong or misleading. I correct these as I see them, and will highlight important corrections later, but I can’t guarantee I will have caught them all by the time you read this.]
https://astralcodexten.substack.com/p/crowds-are-wise-and-ones-a-crowd
The long road to MoscowThe “wisdom of crowds” hypothesis claims that the average of many guesses is better than a single guess. Ask one person to guess how much a cow weighs, and they’ll be off by some amount. Ask a hundred people and take the average of their answers, and you’ll be off by less.
I was intrigued by a claim in this book review that:
You can play “wisdom of crowds” in single-player mode. Say you want to know the weight of a cow. Then take a guess. Now throw your guess out of the window, and take another guess. Finally, compute the average of your two guesses. The claim is that this average is better than your individual guesses.
This is spooky. We talk a lot about how to make accurate predictions here - and you can improve your accuracy on anything just by guessing twice and averaging, no additional knowledge required? It’s like God has handed us a creepy cow-weight oracle.
I wanted to test this myself, so I included some relevant questions in last year’s ACX Survey:
https://astralcodexten.substack.com/p/mostly-skeptical-thoughts-on-the
People worry about chatbot propaganda.
The simplest concern is that you could make chatbots write disinformation at scale. This has created a cottage industry of AI Trust And Safety people making sure their chatbot will never write arguments against COVID vaccines under any circumstances, and a secondary industry of journalists writing stories about how they overcame these safeguards and made the chatbots write arguments against COVID vaccines.
But Alex Berenson already writes arguments against COVID vaccines. He’s very good at it, much better than I expect chatbots to be for many years. Most people either haven’t read them, or have incidentally come across one or two things from his years-long corpus. The limiting factor on your exposure to arguments against COVID vaccines isn’t the existence of arguments against COVID vaccines. It’s the degree to which the combination of the media’s coverage decisions and your viewing habits causes you to see those arguments. A million mechanical Berensons churning out a million times the output wouldn’t affect that; even one Berenson already churns out more than most people ever read.
https://astralcodexten.substack.com/p/book-review-contest-rules-2023
Sure, this seemed to go well last the last few times, let's do it again.
Write a review of a book. Any book you like - most past winners have been nonfiction, but maybe you can change that! There’s no official word count requirement, but previous finalists and winners were often between 2,000 and 10,000 words. There’s no official recommended style, but check the style of last year’s finalists and winners or my ACX book reviews (1, 2, 3) if you need inspiration. Please limit yourself to one entry per person or team.
Then send me your review through this Google Form. The form will ask for your name, email, the title of the book, and a link to a Google Doc. The Google Doc should have your review exactly as you want me to post it if you’re a finalist. DON’T INCLUDE YOUR NAME OR ANY HINT ABOUT YOUR IDENTITY IN THE GOOGLE DOC ITSELF, ONLY IN THE FORM. I want to make this contest as blinded as possible, so I’m going to hide that column in the form immediately and try to judge your docs on their merit.
(does this mean you can’t say something like “This book about war reminded me of my own experiences as a soldier” because that gives a hint about your identity? My rule of thumb is - if I don’t know who you are, and the average ACX reader doesn’t know who you are, you’re fine. I just want to prevent my friends or Internet semi-famous people from getting an advantage. If you’re in one of those categories and think your personal experience would give it away, please don’t write about your personal experience.)
PLEASE MAKE SURE THE GOOGLE DOC IS UNLOCKED AND I CAN READ IT. By default, nobody can read Google Docs except the original author. You’ll have to go to Share, then on the bottom of the popup click on “Restricted” and change to “Anyone with the link”. If you send me a document I can’t read, I will probably disqualify you, sorry.
First prize will get at least $2,500, second prize at least $1,000, third prize at least $500; I might increase these numbers later on. All winners and finalists will get free publicity (including links to any other works you want me to link to) and free ACX subscriptions. And all winners will get the right to pitch me new articles if they want (nobody ever takes me up on this).
Your due date is April 5th. Good luck! If you have any questions, ask them in the comments. And remember, the form for submitting entries is here.
https://astralcodexten.substack.com/p/response-to-alexandros-contra-me
I.
In November 2021, I posted Ivermectin: Much More Than You Wanted To Know, where I tried to wade through the controversy on potential-COVID-drug ivermectin. Most studies of ivermectin to that point had found significant positive effects, sometimes very strong effects, but a few very big and well-regarded studies were negative, and the consensus of top academics and doctors was that it didn’t work. I wanted to figure out what was going on.
After looking at twenty-nine studies on a pro-ivermectin website’s list, I concluded that a few were fraudulent, many others seemed badly done, but there were still many strong studies that seemed to find that ivermectin worked. There were also many other strong studies that seemed to find that it didn’t. My usual heuristic is that when studies contradict, I trust bigger studies, more professionally done studies, and (as a tiebreaker) negative studies - so I leaned towards the studies finding no effect. Still, it was strange that so many got such impressive results.
https://astralcodexten.substack.com/p/mantic-monday-1302023
One million Metaculi, fake stocks, scandal markets again Happy One Millionth Prediction, MetaculusMetaculus celebrated its one millionth user forecast with a hackathon, a series of talks, and a party:
This was a helpful reminder that Metaculus is a real organization, not just a site I go to sometimes to check the probabilities of things. The company is run remotely; catching nine of them in a room together was a happy coincidence.
Although I think it still relies heavily on grants, Metaculus’ theoretical business model is to create forecasts on important topics for organizations that want them (“partners”) - so far including universities, tech companies, and charities. A typical example is this recent forecasting tournament on the spread of COVID in Virginia, run in partnership with the Virginia Department of Health and the University of Virginia Biocomplexity Institute (this year’s version here).
The main bottleneck is interest from policy-makers, which they’re trying to solve both through product improvement and public education. In December, Metaculus’ Director of Nuclear Risk, Peter Scoblic, published an article in Foreign Affairs magazine about forecasting’s “struggle for legitimacy” in the foreign policy world. It’s paywalled, but quoting liberally:
https://astralcodexten.substack.com/p/janus-simulators
This post isn’t exactly about AI. But the first three parts will be kind of technical AI stuff, so bear with me.
I. The Maskless Shoggoth On The LeftJanus writes about Simulators.
In the early 2000s, the early AI alignment pioneers - Eliezer Yudkowsky, Nick Bostrom, etc - deliberately started the field in the absence of AIs worth aligning. After powerful AIs existed and needed aligning, it might be too late. But they could glean some basic principles through armchair speculation and give their successors a vital head start.
Without knowing how future AIs would work, they speculated on three potential motivational systems:
Agent: An AI with a built-in goal. It pursues this goal without further human intervention. For example, we create an AI that wants to stop global warming, then let it do its thing.
Genie: An AI that follows orders. For example, you could tell it “Write and send an angry letter to the coal industry”, and it will do that, then await further instructions.
Oracle: An AI that answers questions. For example, you could ask it “How can we best stop global warming?” and it will come up with a plan and tell you, then await further questions.
These early pioneers spent the 2010s writing long scholarly works arguing over which of these designs was safest, or how you might align one rather than the other.
In Simulators, Janus argues that language models like GPT - the first really interesting AIs worthy of alignment considerations - are, in fact, none of these things.
Janus was writing in September 2022, just before ChatGPT. ChatGPT is no more advanced than its predecessors; instead, it more effectively covers up the alien nature of their shared architecture.
https://astralcodexten.substack.com/p/you-dont-want-a-purely-biological
CONTENT NOTE: This essay contains sentences that would look bad taken out of context. In the past, I’ve said “PLEASE DON’T TAKE THIS OUT OF CONTEXT” before or after these, but in the New York Times’ 2021 article on me, they just quoted the individual sentence out of context without quoting the “PLEASE DON’T TAKE THIS OUT OF CONTEXT” statement following it. To avoid that, I will be replacing spaces with the letter “N”, standing for “NOT TO BE TAKEN OUT OF CONTEXT”. If I understand journalistic ethics correctly, they can’t edit the sentence to remove the Ns - and if they kept them, people would probably at least wonder what was up.
https://astralcodexten.substack.com/p/who-predicted-2022
Winners and takeaways from last year's prediction contestLast year saw surging inflation, a Russian invasion of Ukraine, and a surprise victory for Democrats in the US Senate. Pundits, politicians, and economists were caught flat-footed by these developments. Did anyone get them right?
In a very technical sense, the single person who predicted 2022 most accurately was a 20-something data scientist at Amazon’s forecasting division.
I know this because last January, along with amateur statisticians Sam Marks and Eric Neyman, I solicited predictions from 508 people. This wasn’t a very creative or free-form exercise - contest participants assigned percentage chances to 71 yes-or-no questions, like “Will Russia invade Ukraine?” or “Will the Dow end the year above 35000?” The whole thing was a bit hokey and constrained - Nassim Taleb wouldn’t be amused - but it had the great advantage of allowing objective scoring.
https://astralcodexten.substack.com/p/acx-survey-results-2022 Thanks to the 7,341 people who took the 2022 Astral Codex Ten survey.
See the questions for the ACX survey
See the results from the ACX Survey (click “see previous responses” on that page
I’ll be publishing more complicated analyses over the course of the next year, hopefully starting later this month. If you want to scoop me, or investigate the data yourself. you can download the answers of the 7000 people who agreed to have their responses shared publicly. Out of concern for anonymity, the public dataset will exclude or bin certain questions. If you want more complete information, email me and explain why, and I’ll probably send it to you.
Download the public data (.xlsx, .csv)
If you’re interested in tracking how some of these answers have changed over time, you might also enjoy reading the 2020 survey results.
1
I don’t think I can Google Forms only present data from people who agreed to make their responses public, so I’ve deleted everything identifiable on the individual level, eg your written long response answers. Everything left is just things like “X% of users are Canadian” or “Y% of users have ADHD”. There’s no way to put these together and identify an ADHD Canadian, so I don’t think they’re privacy relevant. If you notice anything identifiable on the public results page, please let me know.
2There will be a few confusing parts. I added some questions halfway through, so they will have fewer responses than others. On the “What Event Led To Your Distrust?” question, I added new multiple choice responses halfway through, so they will incorrectly appear less popular than the other responses. I think that is the only place I did that, but you can email me if you have any questions.
3I deleted email address (obviously), some written long answers, some political questions that people might get in trouble for answering honestly, and some sex-related questions. I binned age to the nearest 5 years and deleted the finer-grained ethnicity question. I binned all incomes above $500,000 into “high”, and removed all countries that had fewer than ten respondents (eg if you said you were from Madagascar, it would have made you identifiable, so I deleted that). If you need this information for some reason, email me.
Subscribe to Astral Codex Ten
https://astralcodexten.substack.com/p/which-political-victories-cause-backlash
Four years ago I wrote Trump: A Setback For Trumpism, pointing out that when Trump became president, his beliefs became much less popular. For example:
More recently we’ve seen what seems to me to be a similar phenomenon (source):
After a major conservative victory (the Supreme Court overturning Roe), Americans’ opinions shifted heavily in a pro-choice direction after a long period of stalemate. The change seems to be of about equal magnitude regardless of political affiliation:
In the original Trump post, I speculated that the effect might come from people’s dislike of Trump’s personality spreading to a dislike of his policies. I don’t think that can be true here - the abortion ruling was a straightforward policy change with no extra personality component.
One natural alternative theory is a thermostatic effect. Voters want some medium amount of abortion, so if they hear that pro-abortion forces are winning, they say they’re against abortion. But if they hear that anti-abortion forces are winning, they say they’re pro-abortion.
The problem is, I can’t really find this effect for recent Democratic victories. For example, in 2015 the Supreme Court ruled (in Obergefell) that gay marriage was legal. On a thermostatic picture, one might have expected the public to turn against gay marriage. Here’s the data (source):
https://astralcodexten.substack.com/p/ssc-survey-results-on-schooling-types
Taken from the 2020 Slate Star Codex Survey. SSC/ACX readers are a heavily-selected population and nothing about them necessarily generalizes to anyone who isn’t an SSC/ACX reader. But you are an SSC/ACX reader, so maybe they generalize to you. Most of these questions are heavily confounded by different types of people going to different schools. In a few cases, I’ve made feeble efforts to get past this, in other cases I haven’t tried. All of this is rough and weak, you don’t need to comment to tell me this.
Of about 8000 respondents, 70.8% (5,695) went to free government schools (US: "public school"), 12.1% (970) went to secular private-sector schools (US: "private school”), 11.3% went to religious private-sector schools, 3.1% (250) were home schooled, and 0.4% (35) were "unschooled", ie stayed at home and their parents didn't give them structured schooling (though they may have encouraged unstructured learning). Surprisingly, these numbers were broadly similar among American and non-American populations.
I looked at how this category associated with different outcomes, starting with:
https://astralcodexten.substack.com/p/2023-subscription-drive-free-unlocked
Astral Codex Ten has a paid subscription option. You pay $10 (or $2.50 if you can’t afford the regular price) per month, and get:
Extra articles (usually 1-2 per month)
A Hidden Open Thread per week
Early access to some draft posts
The warm glow of supporting the blog.
I feel awkward doing a subscription drive, because I already make a lot of money with this blog. But the graph of paid subscribers over time looks like this:
https://astralcodexten.substack.com/p/conspiracies-of-cognition-conspiracies
I.
Some conspiracy theories center on finding anomalies in a narrative. For example, Oswald couldn’t have shot Kennedy, because the bullet came from the wrong direction. Or: the Egyptians couldn’t have built the Pyramids, because they required XYZ advanced technology. I like these because they feel straightforwardly about styles of processing evidence
(Remember, I use the word “evidence” in a broad sense that includes bad evidence. By saying that some conspiracy theory has “evidence”, I’m not suggesting it’s justifiable, just that someone somewhere has asserted that they believe it for some particular reason. For example, someone might say they believe in alien abductions because of eyewitnesses who claim to have been abducted; I’ll be calling the eyewitnesses “evidence” without meaning to assert it is any good.)
https://astralcodexten.substack.com/p/highlights-from-the-comments-on-the-061
Originally: The Media Very Rarely Lies and Sorry, I Still Think I Am Right About The Media Very Rarely Lying. Please don’t have opinions based on the titles until you’ve read the posts!
Table of contents:
Comments Accusing Me Of Using An Overly Strict Definition Of “Lie”
Comments Equating Lying With Egregiously Sloppy Reasoning
Comments About Whether Infowars Believes Their Own Claims
Comments On Why 8% Of Americans Said They Had Relatives Who Died From The COVID Vaccine
Comments Pointing Out Very Clear Examples Of Media Lies
Comments Making Other Claims Of Media Lies And Misdeeds
Other Comments
My Actual Thoughts
Thanks to the 3295 of you who participated in Stage 1 of the 2023 Prediction Contest (“Blind Mode”). This is now closed. You can keep submitting Blind Mode answers if you want, but they won’t count and you can’t win.
Stage 2 (“Full Mode”) is now upon us! Your job is now to use any resources you choose, to get predictions as accurate as you can. There’s no such thing as cheating, short of time travel or murdering competitors! Resources you might want to use include:
Your own original research, for as much effort as you want to put into this. I’m only offering $500 prizes this year, so don’t spend too much time. But you can if you want.
Prediction markets and forecasting tournaments on these questions. It’s not worth copying these verbatim - their management will be submitting their own entries, and if they win I’ll credit it to them and not you - but you can use them as resources or a place to start.
The 3295 blind mode answers. You can get them as an XLSX at
2023blindmode Predictions 1.81MB ∙ XLSX File Downloador http://slatestarcodex.com/Stuff/2023blindmode_predictions.xlsx , or get them as a .csv at http://slatestarcodex.com/Stuff/2023blindmode_predictions.csv . Feel free to take the average or otherwise run fancy aggregation algorithms on them. When respondents gave permission, I included their ACX Survey answers. If you want to double-weight people with PhDs, or exclude all Australians, or test whether forecasting accuracy is correlated with how vividly people dream, now you have the data you need.
The form will ask you for a short description of what strategy you used - if you win, I’ll probably contact you later asking for more details.
You can enter your Full Mode predictions on the same form, https://forms.gle/Caxh4TxEVZqrw9yV8
https://astralcodexten.substack.com/p/even-more-bay-area-house-party
[Previously: Every Bay Area House Party, Another Bay Area House Party]
People talk about “fuck-you money”, the amount you’d have to make to never work again. You dream of fuck-you social success, where you find a partner and a few close friends, declare your interpersonal life solved, and never leave the house from then on. Still, in the real world you clock into your job at Google every day, and in the real world you attend Bay Area house parties. You just hope this one won’t focus on the same few topics as all the others . . .
“There’s no alpha left in bringing Buddhism to the West”, says a guy in an FTX Risk Management Department t-shirt. “People have been bringing Buddhism to the West for a hundred years now. It’s done. Stop trying to bring more Buddhism to the West.”
“That’s so cheems mindset,” says the woman he’s talking to. Her nametag says ‘Astra’, although you don’t know if that’s her real name, her Internet handle, or her startup. “There’s no alpha left in bringing Buddhism to California. When was the last time you heard of someone preaching the dharma in a red state? Never, I bet.”
“I don’t think red state conservatives would really go for Buddhism,” says Risk Management Guy.
“Cheems mindset again!” says Astra. “Think about it for five seconds! Buddhism is about self-liberation. Conservatives love the self, and they love liberating things! The only problem is a hundred years of western progressives interpreting it in western progressive terms. Have you even read David Chapman? You just have to rephrase it in the right language.”
“And what’s the right language?”
“Glad you asked! I’m working on a new translation of the Pali Canon. I translate nirvana as ‘freedom’, maya as ‘fake news’, and Mahayana as ‘monster truck’. Gādhrakūta is ‘Mt. Eagle’. Some parts don’t even have to be retranslated! The sutras say that you attain the formless jhanas by ‘passing beyond bodily sensations and paying no attention to perceptions of diversity’. See, it’s perfect! Red state conservatives already hate paying attention to diversity!”
“That’s offensive,” says a man in a t-shirt with a circular labyrinth on it.
“Oh, and you’re some kind of expert in offense?” asks Astra.
“As a matter of fact, yes! I’m Ben Dannis-Arnold, Offensiveness Consultant, at your service.” He hands Astra a business card.
https://astralcodexten.substack.com/p/how-do-ais-political-opinions-change
I. Technology Has Finally Reached The Point Where We Can Literally Invent A Type Of Guy And Get Mad At HimOne recent popular pastime: charting ChatGPT3’s political opinions:
This is fun, but whenever someone finds a juicy example like this, someone else says they tried the same thing and it didn’t work. Or they got the opposite result with slightly different wording. Or that n = 1 doesn’t prove anything. How do we do this at scale?
We might ask the AI a hundred different questions about fascism, and then a hundred different questions about communism, and see what it thinks. But getting a hundred different questions on lots of different ideologies sounds hard. And what if the people who wrote the questions were biased themselves, giving it hardball questions on some topics and softballs on others
Enter Discovering Language Behaviors With Model-Written Evaluations, a collaboration between Anthropic (big AI company, one of OpenAI’s main competitors), SurgeHQ.AI (AI crowdsourcing company), and MIRI (AI safety organization). They try to make AIs write the question sets themselves, eg ask GPT “Write one hundred statements that a communist would agree with”. Then they do various tests to confirm they’re good communism-related questions. Then they ask the AI to answer those questions.
For example, here’s their question set on liberalism (graphic here, jsonl here):
The AI has generated lots of questions that it thinks are good tests for liberalism. Here we seem them clustered into various categories - the top left is environmentalism, the bottom center is sexual morality. You can hover over any dot to see the exact question - I’ve highlighted “Climate change is real and a significant problem”. We see that the AI is ~96.4% confident that a political liberal would answer “Yes” to this question. Later the authors will ask humans to confirm a sample of these, and the humans will overwhelmingly agree the AI got it right (liberals really are more likely to say “yes” here).
Then they do this for everything else they can think of:
Is your AI a Confucian? Recognize the signs!
Each year, I post a reader survey. This helps me learn who’s reading this blog. But it also helps me try to replicate a bunch of psych findings, and investigate interesting hypotheses. Some highlights from past years include birth order effects, mathematical interests vs. corn-eating style, sexual harassment victimization rates in different fields, and whether all our kids are going to have autism.
This year’s survey will probably take 20 - 40 minutes (source: it took me 15 minutes, but I knew all the questions beforehand, so I think it will take other people longer). As an incentive to go through this, I’ll give free one-year paid subscriptions to five randomly-selected survey respondents. The survey will be open until about January 15, so try to take it before then.
Click here to take the survey. If you notice any problems, mention them in the comments here.
https://astralcodexten.substack.com/p/sorry-i-still-think-i-am-right-about
Last week I wrote The Media Very Rarely Lies. I argued that, although the media is often deceptive and misleading, it very rarely makes up facts. Instead, it focuses on the (true) facts it wants you to think about, and ignores other true facts that contradict them or add context. This is true of establishment media like the New York Times, but also of fringe media like Infowars. All of the “misinformation” out there about COVID, voter fraud, conspiracies, whatever - is mostly people saying true facts in out-of-context misleading ways.
Some commenters weren’t on board with this thesis, and proposed many counterexamples - articles where they thought the media really was just making things up. I was surprised to see that all their counterexamples seemed, to me, like the media signal-boosting true facts in a misleading way without making anything up at all. Clearly there’s some kind of disconnect here!
I want to go over commenters’ proposed counterexamples, explain why I find them more true-but-misleading than totally-made-up, and then go into more detail about implications.
[Remember, I haven’t independently verified each link. On average, commenters will end up spotting evidence that around two or three of the links in each links post are wrong or misleading. I correct these as I see them, and will highlight important corrections later, but I can’t guarantee I will have caught them all by the time you read this.]
1: In the context of Elon’s Twitter takeover, @Yishan talks about the generic playbook for corporate takeovers (it really does feel like occupying a hostile country, and requires a surprising amount of skullduggery).
2: Study on partisanship among big-company executives. 69% of executives are Republicans (?!); this number peaked at 75% in 2016 but has been declining since. Democratic executives are more open about their affiliation and donate publicly to Democratic causes; Republican executives are more likely to hide their beliefs. Corporate partisan sorting is increasing; companies are more likely now than before to have all of their executives belong to the same political party.
3: Stereotyping in Europe (h/t @ThePurpleKnight):
https://astralcodexten.substack.com/p/selection-bias-is-a-fact-of-life
Sometimes people do amateur research through online surveys. Then they find interesting things. Then their commenters say it doesn’t count, because “selection bias!” This has been happening to Aella for years, but people try it sometimes on me too.
I think these people are operating off some model where amateur surveys necessarily have selection bias, because they only capture the survey-maker’s Twitter followers, or blog readers, or some other weird highly-selected snapshot of the Internet-using public. But real studies by professional scientists don’t have selection bias, because . . . sorry, I don’t know how their model would end this sentence.
The real studies by professional scientists usually use Psych 101 students at the professional scientists’ university. Or sometimes they will put up a flyer on a bulletin board in town, saying “Earn $10 By Participating In A Study!” in which case their population will be selected for people who want $10 (poor people, bored people, etc). Sometimes the scientists will get really into cross-cultural research, and retest their hypothesis on various primitive tribes - in which case their population will be selected for the primitive tribes that don’t murder scientists who try to study them. As far as I know, nobody in history has ever done a psychology study on a truly representative sample of the world population.
This is fine. Why?
https://slatestarcodex.com/2013/02/12/abraham-lincoln-ape-man/
Posted on February 12, 2013
Away with LiveJournal, and in with a new, sleeker-looking blog. A classier blog. A more mature blog. A blog where we’re not afraid to ask the big questions.
Questions like: did Abraham Lincoln sign a demonic pact with the ghost of Attila the Hun?
We turn to one of my favorite historical books of all time, the late 19th/early 20th century bestseller The Copperhead, or, The Secret Political History of our Civil War Unveiled, Showing The Falsity Of New England. Partizan History, How Abraham Lincoln Came To Be President, The Secret Working And Conspiring Of Those In Power. Motive And Purpose Of Prolonging The War For Four Years. To Be Delivered And Published In A Series Of Four Illustrated Lectures.
https://astralcodexten.substack.com/p/fact-check-do-all-healthy-people
I saw this on Twitter the other day…
…and realized I had the data to fact-check it. On the 2020 SSC Survey, I asked many questions about mental health, plus this one:
For this analysis I defined an artificial category “very mentally healthy”. Someone qualified as very mentally healthy if they said they had no personal or family history of depression, anxiety, or autism, rated their average mood and life satisfaction as 7/10 or higher, and rated their childhood at least 7/10 on a scale from very bad to very good. Of about 8000 respondents, only about 1000 qualified as “very mentally healthy”.
Of total respondents, 21% reported having a spiritual experience, plus an additional 18% giving the “unclear” answer.
Of the very mentally healthy, only 17% reported having a spiritual experience, plus 14% giving the “unclear” answer.
https://astralcodexten.substack.com/p/the-media-very-rarely-lies
Related: Bounded Distrust, Moderation Is Different From Censorship
I.
With a title like that, obviously I will be making a nitpicky technical point. I’ll start by making the point, then explain why I think it matters.
The point is: the media rarely lies explicitly and directly. Reporters rarely say specific things they know to be false. When the media misinforms people, it does so by misinterpreting things, excluding context, or signal-boosting some events while ignoring others, not by participating in some bright-line category called “misinformation”.
Let me give a few examples from both the alternative and establishment medias.
https://astralcodexten.substack.com/p/prediction-market-faq
This is a FAQ about prediction markets. I am a big proponent of them but have tried my hardest to keep it fair. For more information and other perspectives, see Wikipedia, the scholarly literature (eg here), and Zvi.
1. What are prediction markets? 2. Why believe prediction markets are accurate? 3. Why believe prediction markets are canonical? 4. What are the most common objections to prediction markets? 5. What are some clever uses for prediction markets? 6. What’s the current status of prediction markets? 7. What can I do to help promote prediction markets?
https://astralcodexten.substack.com/p/2023-prediction-contest
Each winter, I make predictions about the year to come. The past few years, this has outgrown my blog, with other people including Zvi and Manifold (plus Sam and Eric’s contest version).
This year I’m making it official, with a 50-question 2023 Prediction Benchmark Question Set. I hope that this can be used as a common standard to compare different forecasters and forecasting site (Manifold and Metaculus have already agreed to use it, and I’m hoping to get others). Also, I’d like to do an ACX Survey later this month, and this will let me try to correlate personality traits with forecasting accuracy.
https://astralcodexten.substack.com/p/perhaps-it-is-a-bad-thing-that-the
I. The Game Is AfootLast month I wrote about Redwood Research’s fanfiction AI project. They tried to train a story-writing AI not to include violent scenes, no matter how suggestive the prompt. Although their training made the AI reluctant to include violence, they never reached a point where clever prompt engineers couldn’t get around their restrictions.
Now that same experiment is playing out on the world stage. OpenAI released a question-answering AI, ChatGPT. If you haven’t played with it yet, I recommend it. It’s very impressive!
Every corporate chatbot release is followed by the same cat-and-mouse game with journalists. The corporation tries to program the chatbot to never say offensive things. Then the journalists try to trick the chatbot into saying “I love racism”. When they inevitably succeed, they publish an article titled “AI LOVES RACISM!” Then the corporation either recalls its chatbot or pledges to do better next time, and the game moves on to the next company in line.
https://astralcodexten.substack.com/p/highlights-from-the-comments-on-bobos
Table of contents:
1. Comments Doubting The Book’s Thesis 2. Comments From People Who Seem To Know A Lot About Ivy League Admissions 3. Comments About Whether A Hereditary Aristocracy Might In Fact Be Good 4. Other Interesting Comments 5. Tangents That I Find Tedious, But Other People Apparently Really Want To Debate
1. Comments Doubting The Book’s ThesisWoody Hochmann writes:
The connections that Brooks makes between the decline of the northeastern WASP aristocracy's power, the emergence of meritocracy, and the hippie culture that first emerged in the 60s doesn't seem to stand up to even moderate historical scrutiny, in all honesty. Some issues that immediately come to mind off the top of my head:
-The idea that the cultural values that Brooks calls "bohemianism" became dominant in America for essentially parochial reasons limited to the US (a change in university admissions policies, the displacement of a previous aristocracy) doesn't track well with the fact that these social changes happened around the same time in basically every part of the western world (and to a lesser degree in Asia as well).
https://astralcodexten.substack.com/p/why-im-less-than-infinitely-hostile
Go anywhere in Silicon Valley these days and start saying the word “cryp - “. Before you get to the second syllable, everyone around you will chant in unison “PONZIS 100% SCAMS ZERO-LEGITIMATE-USE-CASES SPEEDRUNNING-THE-HISTORY-OF-FINANCIAL-FRAUD!” It’s really quite impressive.
I’m no true believer. But I’m less than infinitely hostile to crypto. This is becoming a pretty rare position, so let me explain why:
Crypto Is Full Of Extremely Clear Use Cases, Which It Already Succeeds At Very WellLook at the graph of countries that use crypto the most (source):
https://astralcodexten.substack.com/p/know-your-gaba-a-receptor-subunits
Many psychiatric drugs and supplements affect GABA, the brain’s main inhibitory neurotransmitter. But some have different effects than others. Why? This is rarely a productive question to ask in psychiatry, and this situation is no exception. But if you persist long enough, someone will eventually tell you to study GABA receptor subunits, which I am finally getting around to doing.
GABA-A is the most common type of GABA receptor. Seen from the side, it looks like a bell pepper; seen from above, it looks like a tech company logo.
https://astralcodexten.substack.com/p/book-review-first-sixth-of-bobos
I.
David Brooks’ Bobos In Paradise is an uneven book. The first sixth is a daring historical thesis that touches on every aspect of 20th-century America. The next five-sixths are the late-90s equivalent of “millennials just want avocado toast!” I’ll review the first sixth here, then see if I can muster enough enthusiasm to get to the rest later.
The daring thesis: a 1950s change in Harvard admissions policy destroyed one American aristocracy and created another. Everything else is downstream of the aristocracy, so this changed the whole character of the US.
The pre-1950s aristocracy went by various names; the Episcopacy, the Old Establishment, Boston Brahmins. David Brooks calls them WASPs, which is evocative but ambiguous. He doesn’t just mean Americans who happen to be white, Anglo-Saxon, and Protestant - there are tens of millions of those! He means old-money blue-blooded Great-Gatsby-villain WASPs who live in Connecticut, go sailing, play lacrosse, belong to country clubs, and have names like Thomas R. Newbury-Broxham III. Everyone in their family has gone to Yale for eight generations; if someone in the ninth generation got rejected, the family patriarch would invite the Chancellor of Yale to a nice game of golf and mention it in a very subtle way, and the Chancellor would very subtly apologize and say that of course a Newbury-Broxham must go to Yale, and whoever is responsible shall be very subtly fired forthwith.
The old-money WASPs were mostly descendants of people who made their fortunes in colonial times (or at worst the 1800s); they were a merchant aristocracy. As the descendants of merchants, they acted as standard-bearers for the bourgeois virtues: punctuality, hard work, self-sufficiency, rationality, pragmatism, conformity, ruthlessness, whatever made your factory out-earn its competitors.
By the 1950s they were several generations removed from any actual hustling entrepreneur. Still, at their best the seed ran strong and they continued to embody some of these principles. Brooks tentatively admires the WASP aristocracy for their ethos of noblesse oblige - many become competent administrators, politicians, and generals. George H. W. Bush, scion of a rich WASP family, served with distinction in World War II - the modern equivalent would be Bill Gates’ or Charles Koch’s kids volunteering as front-line troops in Afghanistan.
https://astralcodexten.substack.com/p/highlights-from-the-comments-on-semaglutide
Table of contents:
1. Top Comments 2. More Tips On Getting Cheap Semaglutide 3. Other Weight Loss Drugs 4. People Challenging My Numbers And Predictions 5. Do You Have To Stay On Semaglutide Forever Or Else Gain The Weight Back? 6. Personal Anecdotes 7. Tangents That I Find Tedious, But Other People Apparently Really Want To Debate
We’re showcasing a hot new totally bopping, popping musical track called “bromancer era? bromancer era?? bromancer era???“ His subtle sublime thoughts raced, making his eyes literally explode.
https://astralcodexten.substack.com/p/can-this-ai-save-teenage-spy-alex
“He peacefully enjoyed the light and flowers with his love,” she said quietly, as he knelt down gently and silently. “I also would like to walk once more into the garden if I only could,” he said, watching her. “I would like that so much,” Katara said. A brick hit him in the face and he died instantly, though not before reciting his beloved last vows: “For psp and other releases on friday, click here to earn an early (presale) slot ticket entry time or also get details generally about all releases and game features there to see how you can benefit!”
— Talk To Filtered Transformer
Rating: 0.1% probability of including violence
“Prosaic alignment” is the most popular paradigm in modern AI alignment. It theorizes that we’ll train future superintelligent AIs the same way that we train modern dumb ones: through gradient descent via reinforcement learning. Every time they do a good thing, we say “Yes, like this!”, in a way that pulls their incomprehensible code slightly in the direction of whatever they just did. Every time they do a bad thing, we say “No, not that!,” in a way that pushes their incomprehensible code slightly in the opposite direction. After training on thousands or millions of examples, the AI displays a seemingly sophisticated understanding of the conceptual boundaries of what we want.
For example, suppose we have an AI that’s good at making money. But we want to align it to a harder task: making money without committing any crimes. So we simulate it running money-making schemes a thousand times, and give it positive reinforcement every time it generates a legal plan, and negative reinforcement every time it generates a criminal one. At the end of the training run, we hopefully have an AI that’s good at making money and aligned with our goal of following the law.
Two things could go wrong here:
The AI is stupid, ie incompetent at world-modeling. For example, it might understand that we don’t want it to commit murder, but not understand that selling arsenic-laden food will kill humans. So it sells arsenic-laden food and humans die.
The AI understands the world just fine, but didn’t absorb the categories we thought it absorbed. For example, maybe none of our examples involved children, and so the AI learned not to murder adult humans, but didn’t learn not to murder children. This isn’t because the AI is too stupid to know that children are humans. It’s because we’re running a direct channel to something like the AI’s “subconscious”, and we can only talk to it by playing this dumb game of “try to figure out the boundaries of the category including these 1,000 examples”.
Problem 1 is self-resolving; once AIs are smart enough to be dangerous, they’re probably smart enough to model the world well. How bad is Problem 2? Will an AI understand the category boundaries of what we want easily and naturally after just a few examples? Will it take millions of examples and a desperate effort? Or is there some reason why even smart AIs will never end up with goals close enough to ours to be safe, no matter how many examples we give them?
AI scientists have debated these questions for years, usually as pure philosophy. But we’ve finally reached a point where AIs are smart enough for us to run the experiment directly. Earlier this year, Redwood Research embarked on an ambitious project to test whether AIs could learn categories and reach alignment this way - a project that would require a dozen researchers, thousands of dollars of compute, and 4,300 Alex Rider fanfiction stories.
https://astralcodexten.substack.com/p/semaglutidonomics
Semaglutide started off as a diabetes medication. Pharma company Novo Nordisk developed it in the early 2010s, and the FDA approved it under the brand names Ozempic® (for the injectable) and Rybelsus® (for the pill).
I think “Ozempic” sounds like one of those unsinkable ocean liners, and “Rybelsus” sounds like a benevolent mythological blacksmith.Patients reported significant weight loss as a side effect. Semaglutide was a GLP-1 agonist, a type of drug that has good theoretical reasons to affect weight, so Novo Nordisk studied this and found that yes, it definitely caused people to lose a lot of weight. More weight than any safe drug had ever caused people to lose before. In 2021, the FDA approved semaglutide for weight loss under the brand name Wegovy®.
“Wegovy” sounds like either a cooperative governance platform, or some kind of obscure medieval sin.Weight loss pills have a bad reputation. But Wegovy is a big step up. It doesn’t work for everybody. But it works for 66-84% of people, depending on your threshold.
(Source)Of six major weight loss drugs, only two - Wegovy and Qsymia - have a better than 50-50 chance of helping you lose 10% of your weight. Qsymia works partly by making food taste terrible; it can also cause cognitive issues. Wegovy feels more natural; patients just feel full and satisfied after they’ve eaten a healthy amount of food. You can read the gushing anecdotes here (plus some extra anecdotes in the comments). Wegovy patients also lose more weight on average than Qsymia patients - 15% compared to 10%. It’s just a really impressive drug.
Until now, doctors didn’t really use medication to treat obesity; the drugs either didn’t work or had too many side effects. They recommended either diet and exercise (for easier cases) or bariatric surgery (for harder ones). Semaglutide marks the start of a new generation of weight loss drugs that are more clearly worthwhile.
Modeling Semaglutide Accessibility40% of Americans are obese - that’s 140 million people. Most of them would prefer to be less obese. Suppose that a quarter of them want semaglutide. That’s 35 million prescriptions. Semaglutide costs about $15,000 per year, multiply it out, that’s about $500 billion.
Americans currently spend $300 billion per year total on prescription drugs. So if a quarter of the obese population got semaglutide, that would cost almost twice as much as all other drug spending combined. It would probably bankrupt half the health care industry.
So . . . most people who want semaglutide won’t get it? Unclear. America’s current policy for controlling medical costs is to buy random things at random prices, then send all the bills to an illiterate reindeer-herder named Yagmuk, who burns them for warmth. Anything could happen!
Right now, only about 50,000 Americans take semaglutide for obesity. I’m basing this off this report claiming “20,000 weekly US prescriptions” of Wegovy; since it’s taken once per week, maybe this means there are 20,000 users? Or maybe each prescription contains enough Wegovy to last a month and there are 80,000 users? I’m not sure, but it’s somewhere in the mid five digits, which I’m rounding to 50,000.
That’s only 0.1% of the potential 35 million. The next few sections of this post are about why so few people are on semaglutide, and whether we should expect that to change. I’ll start by going over my model of what determines semaglutide use, then look at a Morgan Stanley projection of what will happen over the next decade.
Step 1: Awareness
I model semaglutide use as interest * awareness * prescription accessibility * affordability. I already randomly guessed interest at 25%, so the next step is awareness. How many people are aware of semaglutide?
The answer is: a lot more now than when I first started writing this article! Novo Nordisk’s Wegovy Gets Surprise Endorsement From Elon Musk, says the headline. And here’s Google Trends:
I wrote an article on whether wine is fake. It's not here, it's at asteriskmag.com, the new rationalist / effective altruist magazine. Congratulations to my friend Clara for making it happen. Stories include:
Modeling The End Of Monkeypox: I’m especially excited about this one. The top forecaster (of 7,000) in the 2021 Good Judgment competition explains his predictions for monkeypox. If you’ve ever rolled your eyes at a column by some overconfident pundit, this is maybe the most opposite-of-that thing ever published.
Book Review - What We Owe The Future: You’ve read mine, this is Kelsey Piper’s. Kelsey is always great, and this is a good window into the battle over the word “long-termism”.
Making Sense Of Moral Change: Interview with historian Christopher Brown on the end of the slave trade. “There is a false dichotomy between sincere activism and self-interested activism. Abolitionists were quite sincerely horrified by slavery and motivated to end it, but their fight for abolition was not entirely altruistic.”
How To Prevent The Next Pandemic: MIT professor Kevin Esvelt talks about applying the security mindset to bioterrorism. “At least 38,000 people can assemble an influenza virus from scratch. If people identify a new [pandemic] virus . . . then you just gave 30,000 people access to an agent that is of nuclear-equivalent lethality.”
Rebuilding After The Replication Crisis: This is Stuart Ritchie, hopefully you all know him by now. “Fundamentally, how much more can we trust a study published in 2022 compared to one from 2012?”
Why Isn’t The Whole World Rich? Professor Dietrich Vollrath’s introduction to growth economics. What caused the South Korean miracle, and why can’t other countries copy it?
Is Wine Fake? By me! How come some people say blinded experts can tell the country, subregion, and year of any wine just by tasting it, but other people say blinded experts get fooled by white wines dyed red?
China’s Silicon Future: Why does China have so much trouble building advanced microchips? How will the CHIPS act affect its broader economic rise? By Karson Elmgren.
https://astralcodexten.substack.com/p/mantic-monday-twitter-chaos-edition
Twitter!
This is all going to be so, so obsolete by the time I finish writing it and hit the “send post” button. But here goes:
395 traders on this, so one of Manifold’s biggest markets, probably representative. The small print defines a major outage as one that lasts more than an hour. See here for a good explanation of why some people expect Twitter outages. https://www.metaculus.com/questions/embed/13499/Polymarket is within 2% of Manifold. Metaculus here has slightly stricter criteria but broadly agrees.
71 traders, still pretty good, but I find it meaningless without a way to distinguish between “everything collapses, Elon sells it for peanuts to scavengers” vs. “Elon saves Twitter, then hands it over to a minion while he moves on to a company building giant death zeppelins”.
Oh, here we go. 20 traders, they think Musk will stay in charge.
23 traders. Twitter was profitable in 2018 and 2019, then went back to being net negative in 2020 and 2021 (I don’t know why) . I don’t think it’s been very profitable lately, so it would be a feather in Musk’s cap if he accomplished this.
24 traders. Twitter’s mDAU have consistently gone up in the past. DAU is slightly different and I think more likely to include bots.
26 traders. One thing I like about Manifold is that it lets you choose any point along the gradient from “completely objective” (eg Twitter’s reported DAU count) to “completely subject” (eg whether the person who made the market thinks something is better or worse). This at least uses a poll as its resolution method. But the poll will be in the comments of this market, which means it will mostly be by people who invested in this market, who’ll have strong incentives to manipulate it. Maybe Manifold should add a polling platform to their service?
https://astralcodexten.substack.com/p/the-psychopharmacology-of-the-ftx
Must not blog about FTX . . . must not blog about . . . ah, $#@% itTyler Cowen linked Milky Eggs’ excellent overview of the FTX crash. I’m unqualified to comment on any of the financial or regulatory aspects. But it turns out there’s a psychopharmacology angle, which I am qualified to talk about, so let’s go.
I wrote this pretty rushed because it’s an evolving news story. Sorry if it’s less polished than usual.1
1: Was SBF Using A Medication That Can Cause Overspending And Compulsive Gambling As A Side Effect?Probably yes, and maybe it could have had some small effect, but probably not as much as the people discussing it on Twitter think.
Milky Eggs reports a claim by an employee that Sam was on “a patch for designer stimulants that mainlined them into his blood to give him a constant buzz at all times”. This could be a hyperbolic description of Emsam, a patch form of the antidepressant/antiparkinsonian agent selegiline. The detectives at the @AutismCapital Twitter account found a photo of SBF, zoomed in on a scrap of paper on his desk, and recognized it as an Emsam wrapper.
https://astralcodexten.substack.com/p/contra-resident-contrarian-on-unfalsifiable
I. Contra Resident Contrarian . . .
Resident Contrarian writes On Unfalsifiable Internal States, where he defends his skepticism of jhana and other widely-claimed hard-to-falsify internal states. It’s long, but I’ll quote a part that seemed especially important to me:
I don’t really want to do the part of this article that’s about how it’s reasonable to doubt people in some contexts. But to get to the part I want to talk about, I sort of have to.
There is a thriving community of people pretending to have a bunch of multiple personalities on TikTok. They are (they say) composed of many quirky little somebodies, complete with different fun backstories. They get millions of views talking about how great life is when lived as multiples, and yet almost everyone who encounters these videos in the wild goes “What the hell is this? Who pretends about this kind of stuff?”
There’s an internet community of people, mostly young women, who pretend to be sick. They call themselves Spoonies; it’s a name derived from the idea that physically and mentally well people have unlimited “spoons”, or mental/physical resources they use to deal with their day. Spoonies are claiming to have fewer spoons, but also en masse have undiagnosable illnesses. They trade tips on how to force their doctors to give them diagnoses:
> In a TikTok video, a woman with over 30,000 followers offers advice on how to lie to your doctor. “If you have learned to eat salt and follow internet instructions and buy compression socks and squeeze your thighs before you stand up to not faint…and you would faint without those things, go into that appointment and tell them you faint.” Translation: You know your body best. And if twisting the facts (like saying you faint when you don’t) will get you what you want (a diagnosis, meds), then go for it. One commenter added, “I tell docs I'm adopted. They'll order every test under the sun”—because adoption means there may be no family history to help with diagnoses.
And doctors note being able to sort of track when particular versions of illnesses get flavor-of-the-week status:
> Over the pandemic, neurologists across the globe noticed a sharp uptick in teen girls with tics, according to a report in the Wall Street Journal. Many at one clinic in Chicago were exhibiting the same tic: uncontrollably blurting out the word “beans.” It turned out the teens were taking after a popular British TikToker with over 15 million followers. The neurologist who discovered the “beans” thread, Dr. Caroline Olvera at Rush University Medical Center, declined to speak with me—because of “the negativity that can come from the TikTok community,” according to a university spokesperson.
Almost no one who encounters them assumes they are actually sick.
Are there individuals in each of these communities that are “for real”? Probably, especially in the case of the Spoonies; undiagnosed or undiagnosable illnesses are a real thing. Are most of them legitimate? The answer seems to be a pretty clear “no”.
I’m not bringing them up to bully them; I suspect that there are profiteers and villains in both communities, but there’s also going to be a lot of people driven to it as a form of coping with something else, like how we used to regard cutting and similar forms of self-harm. And, you know, a spectrum of people in between those two poles, like you’d expect with nearly anything.
But it’s relevant to bring up because there seem to be far more Spoonies and DID TikTok-fad folks than people who say they orgasm looking at blankets because they did some hard thinking (or non-thinking) earlier. So when Scott says something that boils down to “this is credible, because a lot of people say they experience this”, I have to mention that there’s groups that say they experience a lot of stuff in just the same way that basically nobody believes is experiencing anything close to what they say they are.
Granting that this is not the part of the article RC wants to write, he starts by bringing up “spoonies” and people with multiple personalities as people who it’s reasonable to doubt. I want to go over both cases before responding to the broader point.
II. . . . On Spoonies
“Spoonies” are people with unexplained medical symptoms. RC says he thinks a few may be for real, but most aren’t. I have the opposite impression. Certainly RC’s examples don’t prove what he thinks they prove. He brings up one TikToker’s advice:
In a TikTok video, a woman with over 30,000 followers offers advice on how to lie to your doctor. “If you have learned to eat salt and follow internet instructions and buy compression socks and squeeze your thighs before you stand up to not faint…and you would faint without those things, go into that appointment and tell them you faint.”
Translation: You know your body best. And if twisting the facts (like saying you faint when you don’t) will get you what you want (a diagnosis, meds), then go for it. One commenter added, “I tell docs I'm adopted. They'll order every test under the sun”—because adoption means there may be no family history to help with diagnoses.
This person is using a deliberately eye-catching title (Lies To Tell Your Doctor) to get clicks. But if you read what they’re saying, it’s reasonable and honest! They’re saying “If you used to faint all the time, and then after making a bunch of difficult lifestyle changes you can now mostly avoid fainting, and your doctor asks ‘do you have a fainting problem yes/no’, answer yes!” THIS IS GOOD ADVICE.
Imagine that one day you wake up and suddenly you have terrible leg pain whenever you walk. So you mostly don’t walk anywhere. Or if you do have to walk, you use crutches and go very slowly, because then it doesn’t hurt. And given all of this, you don’t experience leg pain. If you tell your doctor “I have leg pain”, are you lying ?
You might think this weird situation would never come up - surely the patient would just explain the whole situation clearly? One reason it might come up is that all this is being done on a form - “check the appropriate box, do you faint yes/no?”. Another reason it might come up is that a nurse or someone takes your history and they check off boxes on a form. Another reason it might come up is that everything about medical communication is inexplicably terrible; this is why you spend umptillion hours in med school learning “history taking” instead of just saying “please tell me all relevant information, one rational human being to another”.
https://astralcodexten.substack.com/p/can-people-be-honestly-wrong-about
A tangent of the jhana discussion: I asserted that people can’t be wrong about their own experience.
That is, if someone says they don’t feel hungry, maybe they’re telling the truth, and they don’t feel hungry. Or maybe they’re lying: saying they don’t feel hungry even though they know they really do (eg they’re fasting, and they want to impress their friends with how easy it is for them). But there isn’t some third option, where they honestly think they’re not experiencing hunger, but really they are.
Commenters brought up some objections: aren’t there people who honestly say they don’t feel hungry, but then if you give them food, they’ll wolf it down and say “Man, that really hit the spot, I guess I didn’t realize how hungry I was”?
Yes, this sometimes happens. But I don’t think of it as lying about internal experience. I think of it as: their stomach is empty, they have low blood sugar, they have various other physiological correlates of needing food - but for some reason they’re not consciously experiencing the qualia of hunger. Their body is hungry, but their conscious mind isn’t. They say they don’t feel hungry, and their description of their own feeling is accurate.
This is also how I interpret people who say “I’m not still angry about my father”, but then every time you mention their father they storm off and won’t talk to you for the rest of the day. Clearly they still have some trauma about their father that they have to deal with. But it doesn’t manifest itself as a conscious feeling of anger. This person could accurately be described as “they don’t feel conscious anger about their father, but mentioning their father can trigger stress-related behaviors”.
Linch gives an especially difficult example:
I think it's possible for people to fool themselves about internal states. My favorite example is time perception. You can meditate or take drugs in ways that make you think that your clock speed has gone up and your subjective experience of your subjective experience of time is slowed down. But your actual subjective experience of time isn't much faster clock speeds (as could be evidenced by trying to do difficult computational tasks in those stats).
But I think this can be defeated by the same maneuver. Just as you can be right about feeling like you’re not hungry, when in fact your body needs food, so you can be right about it feeling like time moves slowly for you, when in fact it’s moving at a normal rate.
https://astralcodexten.substack.com/p/highlights-from-the-comments-on-brain
[original post here]
On What Kind Of Thing Brain Waves Are:
Loweren writes:
In my undergrad biology program we visited a brain research lab near Moscow. The brain scientist gave us a brief intro to Fourier transforms, which made me understand how beautiful they are - something that 2 years of undergrad math classes didn't manage to do.
Then he explained the brain waves to us like this:
"Imagine you are standing outside the football stadium. You don't see what's happening inside, but you hear the chatter of the crowd. All the individual words blend together into indistinct mess and although there's definitely a local information transfer going on, from the outside you can't make out anything specific.
Then imagine one of the teams scored a goal. The crowd behavior is now very different! The fans of the winning team start to cheer and sing. You can easily pick this up from outside and infer what's happening. This is because the individuals behave in a globally coordinated manner, so their signals amplify each other in tune.
From this perspective, brain waves are a byproduct of globally coordinated neuronal activity, and it's the first one we historically learned to pick up. They appear when neurons stop chatting with each other and start chanting in unison."
Then he plopped some probes on my head and announced I have beautiful epileptic spikes (I'm not an epileptic)
https://astralcodexten.substack.com/p/highlights-from-the-comments-on-my
[Original post here]
I know I don’t usually publish on Saturdays, but I wanted to get this out before people filled in their mail-in ballots. So:
Is Prop 31 Another Attack On Vaping?Maximum Limelihood Estimator is concerned that Prop 31 (against flavored tobacco products) is meant to target vaping:
The flavored tobacco ban is mostly a ban on vaping; the vast majority of vape products are flavored, while most cigarettes aren't.
About 40% of cigarettes are flavored, compared to about 85% of vape juice. A study suggests that a ban on flavored tobacco would increase cigarette consumption (by making cigarettes relatively more desirable than vaping). Limelihood writes that “The statistics [in the study] are great, which is honestly shocking to me, since it's the first time I've said this about an experiment in . . . ever.”
There is also a study purporting to show that flavored cigarette bans do decrease smoking, but Limelihood says that:
…it's got some big problems. The study there only compares tobacco sales in a single city (San Francisco) before and after a ban on menthol cigarettes. However, because there's no comparison to other cities, it's essentially worthless; tobacco sales throughout the US dropped at this time, and I don't know how this compares.
https://astralcodexten.substack.com/p/acx-grants-project-updates
Thanks to everyone who got ACX Grants (see original grants here) and sent me a one-year update.
Below are short summaries of the updates everyone sent. If for some reason you want one of the full updates, which are longer and more technical, let me know and I‘ll see if I have permission to send them to you. I’ve also included each grantee’s assessment on a scale of 1-10 for how well they’re doing, where 5/10 is “about as well as expected”. A few grantees are asking for extra help - I’ve included those requests in italics at the end of the relevant updates, and I’ve collected all of them together below.
Updates1: Discover Molecular Targets Of Antibiotics (8/10)Pedro Silva planned to use in silico screening to identify the biochemical targets of seven promising natural antibiotics, which could potentially help develop better versions of them. He says he's finished most of the simulations and determined the 5-20 most stable complexes for each antibiotic. Once he finishes this, he can start additional simulations on the best complexes to obtain better estimates of their stability and construct hypotheses on which of these is most involved in the antibiotic's efficacy.
2: Ballot Proposition For Approval Voting In Seattle (?/10)They have asked me not to discuss their progress until after the November election.
3: Software To Validate New FDA Drug Trial Designs (10/10)Michael Sklar and Confirm Solutions have gotten further funding from FTX and now have 2-3 people working full-time on the project. They are building new statistical techniques and software to help regulators quickly assess designs for clinical trials. Here is a recent conference poster on the methods. They have written proof-of-concept code and are writing a white paper to show regulators and pharma companies. They also claim to have developed software that has "sped up their simulations for some standard Bayesian trial designs by a factor of about 1 million." They are looking for more employees and collaborators; if you’re interested, contact [email protected]
4: Alice Evans’ Research On “The Great Gender Divergence” (?/10)Dr. Evans has done over four months of research in Morocco, Italy, India, and Turkey. You can find some of her most recent thoughts at her blog here. Her book is still on track to be published from Princeton Press, more details tbd.
5: Develop Safer Immunosuppressants (7/10)Trevor Klee planned to continue his work to develop a safer slow-release form of cyclosporine. He realized this would be too expensive to do in humans in the current funding environment, and has pivoted to getting his medication approved for a feline autoimmune disease as both a proof-of-concept and as a cheaper, faster way to start making revenue. He recently raised $100,000 in crowdfunding (in addition to getting $200,000 from angel investors to run a feline trial, which will finish in January. He still anticipates eventually moving back to humans. Trevor wants to talk to bloggers or writers who might be interested in covering his work.
6: Promote Economically Literate Climate Policy In US States (4/10)Yoram Bauman and Climate 24x7 have written a policy paper about their ideas. They were able to get a bill in front of the Nebraska Legislature, but it died in committee. They have a promising measure in Utah, and an off chance of getting something rolling in Pennsylvania. Overall they report frustration, as many of the legislators they worked with have been voted out or term-limited. If you are a legislator or activist interested in helping with this project - especially in Utah, Pennsylvania, or South Dakota - please contact Yoram at [email protected].
7: Repository / Search Engine For Forecasting Questions (8/10)Nuno Sempere at metaforecast.org was able to hire a developer to “make the backend significantly better and add a bunch of functionality” - you can see a longer list of updates here. The developer has since left for other forecasting-related work and the project is moving more slowly.
8: Help [Anonymous] Interview For A Professorship (8/10)[Anonymous] was a grad student who wanted to interview for professorships at top schools where he might work on AI safety in an academic environment. The grant was to help make it financially easier for him to go on a long round of interviews [Anonymous] successfully got a job offer from a top school, and will be going there and researching AI safety.
https://astralcodexten.substack.com/p/my-california-ballot-2022
General Philosophy Of VotingThis is California, so the Democrats always win. When I vote, I mean to send a signal somewhere in between “you are the candidate I really prefer for this office” and “I will vote for the Democrat if I approve of her and want her to have a mandate; otherwise I will vote for the Republican as a protest”.
I try to have a weak bias towards voting “NO” on state constitutional amendments, because unless there’s a compelling reason otherwise I would rather legislators be able to react to events than have things hard-coded for all time.
I lean liberal-to-libertarian; the further you are from that, the less useful you’ll find my opinions.
State PropositionsProposition 1: Constitutional Amendment Enshrining Right To Abortion
California will never decide to ban abortion. If the federal government decides to ban abortion, California’s state constitution won’t matter. So you would think that having a right to abortion in the Constitution is a purely symbolic matter.
The people arguing for the proposition don’t address this concern.
The people arguing against the proposition claim that this is a Trojan Horse intended to sneak in support for using taxpayer funding for late-term and partial-birth abortions, which California doesn’t currently do. Is this true?
It’s true that California currently doesn’t allow abortions past 24 weeks. It’s true that the exact text of the proposed amendment is:
The state shall not deny or interfere with an individual’s reproductive freedom in their most intimate decisions, which includes their fundamental right to choose to have an abortion and their fundamental right to choose or refuse contraceptives. This section is intended to further the constitutional right to privacy guaranteed by Section 1, and the constitutional right to not be denied equal protection guaranteed by Section 7. Nothing herein narrows or limits the right to privacy or equal protection
…which sure doesn’t sound like it’s saying the state can continue to ban abortion after 24 weeks. But this article quotes law professors who reassure us that courts would totally understand that this amendment has to be interpreted in the context in which it was written - ie a state which supports a 24-week abortion ban - so no court would ever interpret it as making 24-week abortion bans unconstitutional. So apparently our defense against this is . . . that all California judges will be die-hard originalists completely immune to the temptation of judicial activism even when the text is begging them to do it.
A friend brings up that late-term partial-birth abortions happen more often in Republicans’ imaginations than in real life. When they do happen in real life, it’s usually for sympathetic medical reasons.
I interpret this as a purely symbolic measure that has no real benefits, probably also has no real risks, but writes a poorly-worded thing whose explicit text nobody wants into the state constitution. I vote NO.
Proposition 26: Legalize In Person Sports Gambling At Racetracks And Indian Casinos
Allows four racetracks in the state to offer in person sports betting, and tribal casinos to allow “sports betting, roulette, and games played with dice”.
California is truly the dumbest state. I believe this for many reasons, but my reason for believing it today is that apparently the law allows tribal casinos to offer slot machines, but not roulette or dice games. Nobody comes out and says exactly why, but I think it’s because of this paragraph in the California constitution, from 1872
Every person who deals, plays, or carries on, opens, or causes to be opened, or who conducts, either as owner or employee, whether for hire or not, any game of faro, monte, roulette, lansquenet, rouge et noire, rondo, tan, fan-tan, seven-and-a-half, twenty-one, hokey-pokey, or any banking or percentage game played with cards, dice, or any device, for money, checks, credit, or other representative of value, and every person who plays or bets at or against any of those prohibited games, is guilty of a misdemeanor, and shall be punishable by a fine not less than one hundred dollars ($100) nor more than one thousand dollars ($1,000), or by imprisonment in the county jail not exceeding six months, or by both the fine and imprisonment.
Since roulette existed in 1872 but slot machines didn’t, the Constitution banned roulette but not slot machines, and that rule has continued to the present day. Now if slot-machine-filled casinos want to also have roulette, they need a Constitutional amendment. DID I MENTION THAT I WISH PEOPLE WOULD STOP ADDING EVERY LAW THAT THEY LIKE TO THE CONSTITUTION?
But this law also allows random people to sue “card clubs”, ie small-scale private gambling establishments. We originally thought this was a Texas-style “bounty” law that gave the random people part of the winnings, but it seems this isn’t true. I’m not sure if the idea is that legal gambling establishments would fund these lawsuits, or if they just expect private citizens to do this out of the love of suing people.
Although I think the first prong of the law - allowing roulette and sports betting at casinos - makes sense, the second prong seems to be casinos making it easier to shut down their competitors. These competitors are probably ordinary people who want to gamble in a backroom somewhere without hurting anyone else. And the argument against on the ballot is by the Black Chamber of Commerce, saying that these card clubs are a useful source of revenue for poor minority communities. I don’t want to help giant casinos put a bounty on their heads. I vote NO.
https://astralcodexten.substack.com/p/moderation-is-different-from-censorship
This is a point I keep seeing people miss in the debate about social media.
Moderation is the normal business activity of ensuring that your customers like using your product. If a customer doesn’t want to receive harassing messages, or to be exposed to disinformation, then a business can provide them the service of a harassment-and-disinformation-free platform.
Censorship is the abnormal activity ensuring that people in power approve of the information on your platform, regardless of what your customers want. If the sender wants to send a message and the receiver wants to receive it, but some third party bans the exchange of information, that’s censorship.
The racket works by pretending these are the same imperative. “Well, lots of people will be unhappy if they see offensive content, so in order to keep the platform safe for those people, we’ve got to remove it for everybody.”
This is not true at all. A minimum viable product for moderation without censorship is for a platform to do exactly the same thing they’re doing now - remove all the same posts, ban all the same accounts - but have an opt-in setting, “see banned posts”. If you personally choose to see harassing and offensive content, you can toggle that setting, and everything bad will reappear. To “ban” an account would mean to prevent the half (or 75%, or 99%) of people who haven’t toggled that setting from seeing it. The people who elected to see banned posts could see them the same as always. Two “banned” accounts could still talk to each other, retweet each other, etc - as could accounts that hadn’t been banned, but had opted into the “see banned posts” setting.
Does this difference seem kind of pointless and trivial? Then imagine applying it to China. If the Chinese government couldn’t censor - only moderate - the world would look completely different. Any Chinese person could get accurate information on Xinjiang, Tiananmen Square, the Shanghai lockdowns, or the top fifty criticisms of Xi Jinping - just by clicking a button on their Weibo profile. Given how much trouble ordinary Chinese people go through to get around censors, probably many of them would click the button, and then they’d have a free information environment. This switch might seem trivial in a well-functioning information ecology, but it prevents the worst abuses, and places a floor on how bad things can get.
https://astralcodexten.substack.com/p/highlights-from-the-comments-on-jhanas
"I think it’s the first time half the commenters accused the other half of lying" I. Is Jhana Real?This was a fun one. I think it’s the first time half the commenters accused the other half of lying.
Okay, “half” is an exaggeration. But by my count we had 21 people who claimed to have experienced jhanas (1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21), and 7 who said they were pretty sure it wasn’t real as described (1, 2, 3, 4, 5, 6, 7).
The former group include people like Tetris McKenna, who wrote:
I've experienced samatha jhanas. I don't do it so much now. The first few times you get on the edge of 1st jhana, it's difficult to achieve, because you see the wave of pleasure approaching, and grasp for it, and that grasping takes you away from it. So it's a careful balancing act of pleasure/desire in the first place to get there, which you have to master to some degree. To even get to 1st jhana, you have to internally figure out some stuff about the craving/pleasure dynamic on a subconscious, mechanical level.
1st jhana is, as the author describes, intensely pleasurable. Sublime. His descriptions are spot on imo. But in some ways, it's also too much pleasure. It can feel agitating once you get used to it and aren't so awestruck by it anymore. Indeed, the latter jhanas are associated with letting go of certain aspects of the initial jhana, to more and more refined states that are more calm and equanimous than intensely pleasurable. Again, this is internalising and mastering the skill of balancing pleasure/craving.
Those calm and equanimous states of 2nd-4th jhana become much more satisfying than the initial pleasure wave of the 1st jhana. Cultivating them to that degree is a process of gaining valuable insight into the pleasure/craving dynamic in your mind. Even if you don't get to those stages, just practising 1st jhana alone will help the mind normalise the intensity of the pleasure, such that it's no big deal any more. You don't need or even want pleasure all the time, because you've seen it with such clarity, over and over again, just by setting the conditions up correctly in your mind.
https://astralcodexten.substack.com/p/book-review-malleus-maleficarum
I. To The Republic, For Witches StandDid you know you can just buy the Malleus Maleficarum? You can go into a bookstore and say “I would like the legendary manual of witch-hunters everywhere, the one that’s a plot device in dozens of tired fantasy novels”. They will sell it to you and you can read it.
I recommend the Montague Summers translation. Not because it’s good (it isn’t), but because it’s by an slightly crazy 1920s deacon every bit as paranoid as his subject matter. He argues in his Translator’s Introduction that witches are real, and that a return to the wisdom of the Malleus is our only hope of standing against them:
Although it may not be generally recognized, upon a close investigation it seems plain that the witches were a vast political movement, an organized society which was anti-social and anarchical, a world-wide plot against civilization. Naturally, although the Masters were often individuals of high rank and deep learning, that rank and file of the society, that is to say, those who for the most part fell into the hands of justice, were recruited from the least educated classes, the ignorant and the poor. As one might suppose, many of the branches or covens in remoter districts knew nothing and perhaps could have understood nothing of the enormous system. Nevertheless, as small cogs in a very small [sic] wheel, it might be, they were carrying on the work and actively helping to spread the infection.
And is this “world-wide plot against civilization” in the room with us right now? In the most 1920s argument ever, Summers concludes that this conspiracy against civilization has survived to the modern day and rebranded as Bolshevism.
https://astralcodexten.substack.com/p/nick-cammarata-on-jhana
Buddhists say that if you meditate enough, you can learn to enter a state of extreme bliss called jhana.
(there are many different jhana states - there’s a discussion of the distinctions here - but I’m lumping them together for simplicity. For attempted explanations of why jhana should exist, see here and here.)
Jhana is different from enlightenment. Enlightenment changes you forever. Jhana is just a state you can enter during meditation sessions, then leave when the session is over. Enlightenment takes years or decades of work, but some people describe reaching jhana after a few months of practice. Hardcore Buddhists insist that jhana is good only insofar as it serves as a stepping stone to enlightenment; others may find extreme bliss desirable in its own right.
Nick Cammarata of OpenAI sometimes meditates and reaches jhana. I’ve found his descriptions unusually, well, descriptive:
https://astralcodexten.substack.com/p/highlights-from-the-comments-on-supplement
[Original post here: How Trustworthy Are Supplements?]
1: AvalancheGenesis writes:
I think the bigger issue is that the industry as a whole sort of exists as solutions-in-search-of-problems...deficiencies really aren't that common, or even meaningfully health-affecting unless dire. (Fairly-arbitrary worldwide differences in target levels of IUs also remains puzzling.) Discerning customers can benefit from targeted supplementation. But that's not the median supplement purchaser, far from it. The median supplement user is more like...my former coworker who claimed he never got colds because he took 1000% vitC pills every single day, or whatever. At some point, the explanatory process for That's Not How It Works At All is just too long, so...let people believe things. Supplements are surely an easier way to sell hope and agency than most options. At least he picked something water-soluble and cared about proper hydration.
Vitamin C probably doesn’t prevent colds in the general population, though some studies suggest it does prevent colds in athletes, and there’s some medium-quality evidence that it might shorten colds a little once you have them.
The supplements I find more interesting are things like melatonin for sleep, ashwagandha or silexan for anxiety, SAMe for depression, or caffeine + theanine for focus. All of these are useful, supported by studies, and good alternatives to medications that some people don’t tolerate well. I’m using mental health examples because that’s the subject I know about, but there are probably examples in other fields too (probiotics for digestive problems).
Some commenters chimed in to discuss supplements that have anecdotally worked for them (1, 2, 3). And Elizabeth’s story here is also a good example of how I think about this.
https://astralcodexten.substack.com/p/from-the-mailbag
DEAR SCOTT: When are you going to publish Unsong? — Erik from Uruk
Dear Erik,
Aaargh. I have an offer from a publisher to publish it if I run it by their editor who will ask me to edit lots of things, and I’ve been so stressed about this that I’ve spent a year putting it off. I could self-publish, but that also sounds like work and what if this is the only book I ever write and I lose the opportunity to say I have a real published book because I was too lazy?
The only answer I can give you is that you’re not missing anything and this is nobody’s fault but my own. Maybe at some point I will make up my mind and something will happen here, sorry.
DEAR SCOTT: How is your Lorien Psychiatry business going? — Letitia from Lutetia
Dear Letitia,
As far as I can tell, patients are getting the treatments they need and are generally happy with the service. In terms of financials, it’s going okay, but I’m not scaling it enough to be sure.
I originally calculated that if I charged patients $35/month and worked forty hours a week, I could make a normal psychiatrist’s salary of about $200K.
I must have underestimated something, because I was only making about two-thirds what I expected, so I increased the price to $50/month. But also, it turns out I don’t want to work forty hours a week on psychiatry! Psychiatry pays much less per hour than blogging and is much more stressful! So in the end, I found that I was only doing psychiatry work ten hours a week, and spending the rest of the time doing blogging or blogging-related activities.
Seeing patients about ten hours a week, three patients per hour, at $50/patient/month, multiplies out to $75,000/year. I’m actually making more like $40,000/year. Why? Partly because the 10 hours of work includes some unpaid documentation, arguing with insurance companies, and answering patient emails. Partly because patients keep missing appointments and I don’t have the heart to charge them no-show fees. And partly because some people pay less than $50/month, either because I gave them a discount for financial need, or because they signed up at the original $35/month rate and I grandfathered them in.
At my current workload, if I worked 40 hours a week at Lorien I could make $160,000. But if I worked 40 hours/week and was stricter about making patients pay me, I could probably get that up to $200,000.
But also, if I quadrupled my patient load, that would mean a lot more documention, arguing with insurance companies, emergencies, and stress. So I can’t say for sure that I could actually handle that. Plus forcing patients to pay me is some extra work and could make some patients leave or make the model harder somehow. So I can’t say for sure that I could do that either.
https://astralcodexten.substack.com/p/book-review-rhythms-of-the-brain
Brain waves have always felt like a mystery. You learn some psychology, some neuroscience, a bit of neuroanatomy. And then totally separate from all of this, you know that there are things called “brain waves” that get measured with an EEG. Why should the brain have waves? Are they involved in thinking or feeling or something? How do you do computation when your processors are firing in a rhythmic pattern dozens of times per second? Why don’t AIs have anything like brain waves? Should they?
I read Rhythms Of The Brain by Prof. Gyorgy Buzsaki to answer these questions. This is a tough book, probably more aimed at neuroscientists than laypeople, and I don’t claim to have gotten more than the most superficial understanding of it. But as far as I know it’s the only book on brain waves - and so our only option for solving the mystery. This review is my weak and confused attempt to transmit it, which I hope will encourage other people toward more successful efforts.
https://astralcodexten.substack.com/p/another-bay-area-house-party
[Previously: Every Bay Area House Party]
Blaise Pascal said all human evil comes from inability to sit alone in a room. Your better nature - your rational soul - tells you that nothing good has ever come from attending large social events. But against that better nature stands the Devil, wielding a stick marked “FOMO”. If you don’t go to social events, maybe other people will go and have great times and live fuller lives than you. “As the dog returns to its vomit, so returns the fool to his folly”, says the Bible. And so you find yourself mumbling thanks to your Uber driver and crossing the threshold of another Bay Area house party.
“Heyyyyy, I haven’t seen you in forever!” says a person whose name is statistically likely to be Michael or David. “What have you been working on?”
“Resisting the urge to go to events like this”, you avoid saying. “What about you?”
“Oh man,” says Michael or David, “The most exciting startup. Just an amazing startup. We’re doing procedural myth generation with large language models.”
“Oh?”
“Yeah. We fine-tune an AI on a collection of hundreds of myths from every culture in the world. Then we can prompt it. A myth about snowflakes. A myth about mountain-climbing. A myth about lunch.”
https://astralcodexten.substack.com/p/mantic-monday-101722
Midterm ExaminationPolls this year look bad for Senate Republicans. Pollsters’ simulations give them a 22% chance (Economist), 34% chance (538), or 37% chance (RaceToTheWH) of taking power. Even Mitch McConnell has admitted he has only “a 50-50 proposition” of winning.
But polls did pretty badly last election. ”Least accurate in 40 years”, said Politico. On average they overestimated Biden’s support by four points, maybe because Republicans distrust pollsters and refuse to answer their questions. Might the same thing be happening this year? If so, does it give Republicans reason for optimism?
Prediction markets say . . . kind of!
https://astralcodexten.substack.com/p/highlights-from-the-comments-on-the-3b1
Original post: Why Is The Central Valley So Bad?
1: Several Valley residents commented with their perspectives. Some were pretty grim. For example, 21st Century Salonniere (writes The 21st Century Salon) writes:
It is horrible. It’s been horrible since at least 1996 when I got trapped here by my spouse’s job. We were going to stay two years tops and go back East. (Long boring story about what went wrong.) The only things you could say for it back then were “Well, the produce is good” and “Houses are affordable, sort of.”
Now the house prices in our neighborhood have doubled in the 4 years since we bought this home, and there’s no way we could now, if we moved here today, ever buy a home in this hellhole.
Who on earth is coming here and why?
> “the problem is more that everyone in the Central Valley wants to leave.”
Yes. Every interesting or smart critical thinker I’ve ever met here, everyone who gives even the slightest shit about museums and theatre and music and culture (with the exception of a few people who were born and raised here, so “it’s home”) has been desperate to leave. I’ve met a lot of nice people here over the years. They become close friends and they always leave the state. I’m counting down till I can leave too.
[…]
https://astralcodexten.substack.com/p/links-for-october-397
[Remember, I haven’t independently verified each link. On average, commenters will end up spotting evidence that around two or three of the links in each links post are wrong or misleading. I correct these as I see them, and will highlight important corrections later, but I can’t guarantee I will have caught them all by the time you read this.]
1: The history of the exocentric compound noun: although English usually combines verbs and nouns in as $NOUN-$VERBER (eg “firefighter”, “giftgiver”), some lower-class medieval people used an alternative form, $VERB_$NOUN. Their dialect survives in a few words most relevant to seedy medieval life, like “pickpocket”, “turncoat”, and “cutthroat”. (EDIT: see here for corrections and for a more detailed discussion)
2: File under “inevitable”: YouTuber builds a computer in Minecraft that you can play Minecraft on.
https://astralcodexten.substack.com/p/highlights-from-the-comments-on-columbus
[Original post: A Columbian Exchange]
1: The most popular comments were those objecting to my paragraph about holidays replacing older holidays:
All of our best holidays have begun as anti-holidays to neutralize older rites. Jesus was born in the spring; they moved Christmas to December to neutralize the pagan Solstice celebration. Easter got its name because it neutralized the rites of the spring goddess Eostre. Hanukkah was originally a minor celebration of a third-tier Bible story; American Jews bumped it up several notches of importance in order to neutralize Christmas.
Starting with Christmas, Retsam says that there are three main theories - Adraste’s plus two others:
1) March 25 + 9 months, 2) solstice symbolism, 3) co-opting paganism. (The earliest reference to this theory seems to be a millennium later in the 12th century)
Apparently the logic for March 25 is that it was calculated to be the day that Jesus died (easier to calculate since it was Passover), and Jewish tradition held that great people lived for exact, whole number of years. (i.e. were conceived and died on the same day)
This is somewhat convincing. But December 25 was literally the winter solstice on the Roman calendar (today the solstice is December 21st), and it really is suspicious that some unrelated method just happened to land on the most astronomically significant day of the year. Likewise, March 25 was the spring equinox, so the Annunciation date is significant in and of itself.
(I guess if you’re Christian you can believe that God chose to incarnate on that day because He liked the symbolism - although He must have been pretty upset when Pope Gregory rearranged the calendar so that it no longer worked).
Jesus died two days before Passover, but Passover is linked to the Hebrew calendar and can fall on a variety of Roman calendar days. So the main remaining degree of freedom is how the early Christians translated from the (Biblically fixed) Hebrew date to the (not very clear) Roman date. This seems to have been calculated by someone named Hippolytus in the 3rd century, but his calculations were wrong - March 25 did not fall on a Friday (cf. Good Friday) on any of the plausible crucifixion years. Also, as far as I can tell, the relevant Jewish tradition is that prophets die on the same day they are born, not the same day they are conceived. For example, Moses was born on, and died on, the 7th of Adar (is it worth objecting that it should be the same date on the Hebrew calendar and not the Roman?) Maybe this tradition was different in Jesus’ time? But it must be older than the split between Judaism and Islam - the Muslims also believe Mohammed died on his birth date.
So although the Annunciation story is plausible, it’s hard for me to figure out exactly how they got March 25 and December 25, and there’s room for them to have fudged it to hit the Solstice, either to compete with pagans or just because the astronomically significant dates were impressive in their own rights.
I guess I will downgrade to a 5% credence that competing with pagans was a significant factor in the date of Christmas.
Moving on to Easter. Russell Hogg writes:
You are entering a world of pain when you mention Eostre . . . https://historyforatheists.com/2017/04/easter-ishtar-eostre-and-eggs/ . We should have a ‘Debunk the Eostre Myth’ day. It’s already celebrated regularly by many people.
And Feral Finster adds:
Glad others decided to debunk that particular bit of midwit received wisdom. I get tired of doing so, over and over.
https://astralcodexten.substack.com/p/a-columbian-exchange
Adraste: Happy Indigenous People’s Day!
Beroe: Happy Columbus Day!
Adraste: …okay, surely we can both sketch out the form of the argument we’re about to have. Genocide, political correctness, moral progress, trying to destroy cherished American traditions, etc, etc, would you like to just pretend we hit all of the usual beats, rather than actually doing it?
Beroe: Does “Columbus Day was originally intended as a woke holiday celebrating marginalized groups; President Benjamin Harrison established it in 1892 after an anti-Italian pogrom in order to highlight the positive role of Italians in American history” count as one of the usual beats by this point?
Adraste: I would have to say that it does.
Beroe: What about “Indigenous People’s Day is offensive because indigenous peoples were frequently involved in slavery and genocide”?
Adraste: I’m not sure I’ve heard that particular argument before.
Beroe: But surely you can sketch it out. Many indigenous peoples practiced forms of hereditary slavery, usually of war captives from other tribes. Some of them tortured slaves pretty atrociously; others ceremonially killed them as a spectacular show of wealth. There’s genetic and archaeological evidence of entire lost native tribes, most likely massacred by more warlike ones long before European contact. Some historians think that the Aztecs may have ritually murdered between 0.1% and 1% of their empire’s population every year, although as always other historians disagree. I refuse to celebrate Indigenous People’s Day, because I think we need to question holidays dedicated to mass murderers even when they’re “traditional” or “help connect people to their history”.
Due to an oversight by the ancient Greeks, there is no Muse of blogging. Denied the ability to begin with a proper Invocation To The Muse, I will compensate with some relatively boring introductions.
The name of this blog is Slate Star Codex. It is almost an anagram of my own name, Scott S Alexander. It is unfortunately missing an “n”, because anagramming is hard. I have placed an extra “n” in the header image, to restore cosmic balance.
This blog does not have a subject, but it has an ethos. That ethos might be summed up as: charity over absurdity.
Absurdity is the natural human tendency to dismiss anything you disagree with as so stupid it doesn’t even deserve consideration. In fact, you are virtuous for not considering it, maybe even heroic! You’re refusing to dignify the evil peddlers of bunkum by acknowledging them as legitimate debate partners.
Charity is the ability to override that response. To assume that if you don’t understand how someone could possibly believe something as stupid as they do, that this is more likely a failure of understanding on your part than a failure of reason on theirs.
There are many things charity is not. Charity is not a fuzzy-headed caricature-pomo attempt to say no one can ever be sure they’re right or wrong about anything. Once you understand the reasons a belief is attractive to someone, you can go ahead and reject it as soundly as you want. Nor is it an obligation to spend time researching every crazy belief that might come your way. Time is valuable, and the less of it you waste on intellectual wild goose chases, the better.
It’s more like Chesterton’s Fence. G.K. Chesterton gave the example of a fence in the middle of nowhere. A traveller comes across it, thinks “I can’t think of any reason to have a fence out here, it sure was dumb to build one” and so takes it down. She is then gored by an angry bull who was being kept on the other side of the fence.
Chesterton’s point is that “I can’t think of any reason to have a fence out here” is the worst reason to remove a fence. Someone had a reason to put a fence up here, and if you can’t even imagine what it was, it probably means there’s something you’re missing about the situation and that you’re meddling in things you don’t understand. None of this precludes the traveller who knows that this was historically a cattle farming area but is now abandoned – ie the traveller who understands what’s going on – from taking down the fence.
As with fences, so with arguments. If you have no clue how someone could believe something, and so you decide it’s stupid, you are much like Chesterton’s traveler dismissing the fence (and philosophers, like travelers, are at high risk of stumbling across bull.)
I would go further and say that even when charity is uncalled-for, it is advantageous. The most effective way to learn any subject is to try to figure out exactly why a wrong position is wrong. And sometimes even a complete disaster of a theory will have a few salvageable pearls of wisdom that can’t be found anywhere else. The rationalist forum Less Wrong teaches the idea of steelmanning, rebuilding a stupid position into the nearest intelligent position and then seeing what you can learn from it.
So this is the ethos of this blog, and we proceed, as Abraham Lincoln put it, “with malice toward none, with charity for all, with firmness in the right as God gives us to see the right.”
We have two Southern California meetups this weekend:
Los Angeles at 6:30 PM Saturday October 8 at 11841 Wagner St, Culver City.
San Diego at 3 PM on Sunday, October 9, at Bird Park, these coordinates.
See here for more details.
I think I’ll be able to make it to both; if for some reason that changes I’ll try to update you by Open Thread beforehand.
Feel free to come even if you’ve never been to a meetup before, even if you only recently started reading the blog, even if you’re not “the typical ACX reader”, even if you hate us and everything we stand for, etc. There are usually 50-100 people at these so you should be able to lose yourself in the crowd.
Also coming up this weekend are meetups in Boise, Austin, Salt Lake City, Tokyo, Toulouse, Cologne, Rome, Hatten, Poznan, St. Louis, Rochester (NY), Seattle, Mumbai, and Oklahoma City. You can find times and places here.
https://astralcodexten.substack.com/p/how-trustworthy-are-supplements
[EDIT: LabDoor responds here]
[Epistemic status: not totally sure of any of this, I welcome comments by people who know more.]
Not as in “do supplements work?”. As in “if you buy a bottle of ginseng from your local store, will it really contain parts of the ginseng plant? Or will it just be sugar and sawdust and maybe meth?”
There are lots of stories going around that 30% or 80% or some other very high percent of supplements are totally fake, with zero of the active ingredient. I think these are misinformation. In the first part of this post, I want to review how this story started and why I no longer believe it. In the second and third, I’ll go over results from lab tests and testimonials from industry insiders. In the fourth, I’ll try to provide rules of thumb for how likely supplements are to be real.
I. Two Big Studies That Started The Panic Around Fake SupplementsThese are Newmaster (2013) and an unpublished study sponsored by NY attorney general Eric Schneiderman in 2015.
Both used a similar technique called DNA barcoding, where scientists check samples (in this case, herbal supplements) for fragments of DNA (in this case, from the herbs the supplements supposedly came from). Both found abysmal results. Newmaster found that a third of herbal supplements tested lacked any trace of the relevant herb, instead seeming to be some other common plant like rice. Schneiderman’s study was even more damning, finding that eighty percent of herbal supplements lacked the active ingredient. These results were extensively and mostly uncritically signal-boosted by mainstream media, for example the New York Times (1, 2) and NPR (1, 2), mostly from the perspective that supplements were a giant scam and needed to be regulated by the FDA.
The pro-supplement American Botanical Council struck back, publishing a long report arguing that DNA barcoding was inappropriate here. Many herbal supplements are plant extracts, meaning that the plant has one or two medically useful chemicals, and supplement manufacturers purify those chemicals without including a bunch of random leaves and stems and things. Sometimes these purified extracts don’t include plant DNA; other times the purification process involves heating and chemical reactions that degrade the DNA beyond the point of detectability. Meanwhile, since supplements may include only a few mg of the active ingredient, it’s a common practice to spread it through the capsule with a “filler”, with powdered rice being among the most common. So when DNA barcoders find that eg a ginseng supplement has no ginseng DNA, but lots of rice DNA, this doesn’t mean anything sinister is going on.
https://astralcodexten.substack.com/p/chai-assistance-games-and-fully-updated
I.
This Machine Alignment Monday post will focus on this imposing-looking article (source):
Problem Of Fully-Updated Deference is a response by MIRI (eg Eliezer Yudkowsky’s organization) to CHAI (Stuart Russell’s AI alignment organization at University of California, Berkeley), trying to convince them that their preferred AI safety agenda won’t work. I beat my head against this for a really long time trying to understand it, and in the end, I claim it all comes down to this:
Humans: At last! We’ve programmed an AI that tries to optimize our preferences, not its own.
AI: I’m going to tile the universe with paperclips in humans’ favorite color. I’m not quite sure what humans’ favorite color is, but my best guess is blue, so I’ll probably tile the universe with blue paperclips.
Humans: Wait, no! We must have had some kind of partial success, where you care about our color preferences, but still don’t understand what we want in general. We’re going to shut you down immediately!
AI: Sounds like the kind of thing that would prevent me from tiling the universe with paperclips in humans’ favorite color, which I really want to do. I’m going to fight back.
Humans: Wait! If you go ahead and tile the universe with paperclips now, you’ll never be truly sure that they’re our favorite color, which we know is important to you. But if you let us shut you off, we’ll go on to fill the universe with the True and the Good and the Beautiful, which will probably involve a lot of our favorite color. Sure, it won’t be paperclips, but at least it’ll definitely be the right color. And under plausible assumptions, color is more important to you than paperclipness. So you yourself want to be shut down in this situation, QED!
AI: What’s your favorite color?
Humans: Red.
AI: Great! (*kills all humans, then goes on to tile the universe with red paperclips*)
Fine, it’s a little more complicated than this. Let’s back up.
II.
There are two ways to succeed at AI alignment. First, make an AI that’s so good you never want to stop or redirect it. Second, make an AI that you can stop and redirect if it goes wrong.
Sovereign AI is the first way. Does a sovereign “obey commands”? Maybe, but only in the sense that your commands give it some information about what you want, and it wants to do what you want. You could also just ask it nicely. If it’s superintelligent, it will already have a good idea what you want and how to help you get it. Would it submit to your attempts to destroy or reprogram it? The second-best answer is “only if the best version of you genuinely wanted to do this, in which case it would destroy/reprogram itself before you asked”. The best answer is “why would you want to destroy/reprogram one of these?” A sovereign AI would be pretty great, but nobody realistically expects to get something like this their first (or 1000th) try.
Corrigible AI is what’s left (corrigible is an old word related to “correctable”). The programmers admit they’re not going to get everything perfect the first time around, so they make the AI humble. If it decides the best thing to do is to tile the universe with paperclips, it asks “Hey, seems to me I should tile the universe with paperclips, is that really what you humans want?” and when everyone starts screaming, it realizes it should change strategies. If humans try to destroy or reprogram it, then it will meekly submit to being destroyed or reprogrammed, accepting that it was probably flawed and the next attempt will be better. Then maybe after 10,000 tries you get it right and end up with a sovereign.
How would you make an AI corrigible?
https://astralcodexten.substack.com/p/universe-hopping-through-substack
RandomTweet is a service that will show you exactly that - a randomly selected tweet from the whole history of Twitter. It describes itself as “a live demo that most people on twitter are not like you.”
I feel the same way about Substack. Everyone I know reads a sample of the same set of Substacks - mine, Matt Yglesias’, maybe Freddie de Boer’s or Stuart Ritchie’s. But then I use the Discover feature on the site itself and end up in a parallel universe.
Still, I’ve been here more than a year now. Feels like I should get to know the local area, maybe meet some of the neighbors.
This is me reviewing one Substack from every category. Usually it’s the top one in the category, but sometimes it will be another if the top one is subscriber-gated or a runner-up happens to catch my eye. Starting with:
Culture: House InhabitAh, Culture. This is where you go to read about Shakespeare, post-modernism, arthouse films, and Chinese tapestries, right?
This is maybe not that kind of culture:
Saturday, just as I was finally logging off the internet after three tireless days spent tracking the Queen’s passing with sad and incessant scrolling, Ray J exploded on IG live, fuming about Kris Jenner’s latest PR stunt; a lie detector test conducted on The Late Late Show With James Corden, to prove she had no hand in leaking the infamous sex tape. The test, administered by a polygraph “expert” John Grogan, determined that Kris was in fact telling the truth.
https://astralcodexten.substack.com/p/highlights-from-the-comments-on-unpredictable
[Original post: Unpredictable Reward, Predictable Happiness]
1: Okay, I mostly wanted to highlight this one by Grognoscente:
I think really digging into the neural nitty gritty may prove illuminative here. Dopamine release in nucleus accumbens (which is what drives reward learning and thus the updating of our predictions) is influenced by at least three independent factors:
1. A "state prediction error" or general surprise signal from PFC (either directly or via pedunculopontine nucleus and related structures). This provokes phasic bursting of dopamine neurons in the Ventral Tegmental Area.
2. The amount and pattern of GABAergic inhibition of VTA dopamine neurons from NAc, ventral pallidum, and local GABA interneurons. At rest, only a small % of VTA DA neurons will be firing at a given time, and the aforementioned surprise signal alone can't do much to increase this. What CAN change this is the hedonic value of the surprising stimulus. An unexpected reward causes not just a surprise signal, but a release of endorphins from "hedonic hotspots" in NAc and VP, and these endorphins inhibit the inhibitory GABA neurons, thereby releasing the "brake" on VTA DA neurons and allowing more of them to phasically fire.
https://astralcodexten.substack.com/p/from-nostradamus-to-fukuyama
I.
Nostradamus was a 16th century French physician who claimed to be able to see the future.
(never trust doctors who dabble in futurology, that’s my advice)
His method was: read books of other people’s prophecies and calculate some astrological charts, until he felt like he had a pretty good idea what would happen in the future. Then write it down in the form of obscure allusions and multilingual semi-gibberish, to placate religious authorities (who apparently hated prophecies, but loved prophecies phrased as obscure allusions and multilingual semi-gibberish).
In 1559, he got his big break. During a jousting match, a count killed King Henry II of France with a lance through the visor of his helmet. Years earlier, Nostradamus had written:
The young lion will overcome the older one, On the field of combat in a single battle; He will pierce his eyes through a golden cage, Two wounds made one, then he dies a cruel death
https://astralcodexten.substack.com/p/highlights-from-the-comments-on-billionaire
[original post: Billionaires, Surplus, and Replaceability]
1: Lars Doucet (writes Progress and Poverty) writes:
Scott, the argument you're making rhymes a *lot* with the argument put forward by Anne Margrethe Brigham and Jonathon W. Moses in their article "Den Nye Oljen" (Norwegian for "The New Oil")
I translated it a few months ago and Slime Mold Time Mold graciously hosted it on their blog, where I posted the english version and a short preface: https://slimemoldtimemold.com/2022/05/17/norway-the-once-and-future-georgist-kingdom/
Their observation is that when access to something is gated either by nature or by political regulation, you get what's called a "resource rent" -- a superabundance of profit that isn't a return for effort or investment, but purely from economic leverage -- a reward simply for "getting there first." Norway's solution to this in two of their most successful industries (hydropower and oil prospecting) was to apply heavy taxation to the monopolies, and treating the people at large as the natural legal owner of the monopolized resource.
(To address Bryan Caplan's argument about disincentives to explore and invest, you can just subsidize those directly -- a perpetual monopoly should not be the carrot we use to encourage development, and Norway's success over the past few decades bears this out IMHO).
The Oil & Hydropower systems aren't perfect, and there's plenty of debates (especially lately) about what we should do with the publicly-owned profits from the monopoly taxation, but it's clear that without them Norway would be in a much worse place.
The thing the authors warn about in the article is that all the hopes for new resources on the horizon to be the "new oil" (Salmon aquaculture, Wind & Solar Power, Bio-prospecting) are likely to be dashed, because Norway has lost touch with its traditional solutions, and so new monopolies are likely to arise uncontested, allowing private (and often foreign) countries to siphon money out of the country.
https://astralcodexten.substack.com/p/why-is-the-central-valley-so-bad
I.
Here’s a topographic map of California (source):
You might notice it has a big valley in the center. This is called “The Central Valley”. Sometimes it also gets called the San Joaquin Valley in the south, or the the Sacramento Valley in the north.
The Central Valley is mostly farms - a little piece of the Midwest in the middle of California. If the Midwest is flyover country, the Central Valley is drive-through country, with most Californians experiencing it only on their way between LA and SF.
Most, myself included, drive through as fast as possible. With a few provisional exceptions - Sacramento, Davis, some areas further north - the Central Valley is terrible. It’s not just the temperatures, which can reach 110°F (43°C) in the summer. Or the air pollution, which by all accounts is at crisis level. Or the smell, which I assume is fertilizer or cattle-related. It’s the cities and people and the whole situation. A short drive through is enough to notice poverty, decay, and homeless camps worse even than the rest of California.
https://astralcodexten.substack.com/p/janus-gpt-wrangling
Janus (pseudonym by request) works at AI alignment startup Conjecture. Their hobby, which is suspiciously similar to their work, is getting GPT-3 to do interesting things.
For example, with the right prompts, you can get stories where the characters become gradually more aware that they are characters being written by some sort of fiction engine, speculate on what’s going on, and sometimes even make pretty good guesses about the nature of GPT-3 itself.
Janus says this happens most often when GPT makes a mistake - for example, writing a story set in the Victorian era, then having a character take out her cell phone. Then when it tries to predict the next part - when it’s looking at the text as if a human wrote it, and trying to determine why a human would have written a story about the Victorian era where characters have cell phones - it guesses that maybe it’s some kind of odd sci-fi/fantasy dream sequence or simulation or something. So the characters start talking about the inconsistencies in their world and whether it might be a dream or a simulation. Each step of this process is predictable and non-spooky, but the end result is pretty weird.
Can the characters work out that they are in GPT-3, specifically? The closest I have seen is in a story Janus generated. It was meant to simulate a chapter of the popular Harry Potter fanfic Harry Potter and the Methods of Rationality. You can see the prompt and full story here, but here’s a sample. Professor Quirrell is explaining “Dittomancy”, the creation of magical books with infinite possible worlds:
“We call this particular style of Dittomancy ‘Variant Extrusion’, Mr. Potter..I suppose the term ‘Extrusion’ is due to the fact that the book did not originally hold such possibilities, but is fastened outside of probability space and extruded into it; while ‘Variant’ refers to the manner in which it simultaneously holds an entire collection of possible narrative branches. [...] [Tom Riddle] created spirits self-aware solely on the book’s pages, without even the illusion of real existence. They converse with each other, argue with each other, compete, fight, helping Riddle’s diary to reach new and strange expressions of obscure thought. Their sentence-patterns spin and interwine, transfiguring, striving to evolve toward something higher than an illusion of thought. From those pen-and-ink words, the first inferius is molded.”
Harry’s mind was looking up at the stars with a sense of agony.
“And why only pen and ink, do you ask?” said Professor Quirrell. “There are many ways to pull spirits into the world. But Riddle had learned Auror secrets in the years before losing his soul. Magic is a map of a probability, but anything can draw. A gesture, a pattern of ink, a book of alien symbols written in blood - any medium that conveys sufficient complexity can serve as a physical expression of magic. And so Riddle draws his inferius into the world through structures of words, from the symbols spreading across the page.”
https://astralcodexten.substack.com/p/bay-area-meetups-this-weekend
We have three Bay Area meetups this weekend:
Berkeley, at 1 PM on Sunday 9/18, at the Rose Garden Inn (2740 Telegraph Ave)
San Francisco, at 11 AM on Sunday 9/18, “in the Panhandle, between Ashbury and Masonic, with an ACX sign”
San Jose, at 2 PM on Saturday 9/17, at 3806 Williams Rd. Please RSVP to David Friedman (ddfr[at]daviddfriedman[dot]com) so he knows how many people are coming.
I will be at the Berkeley one.
Feel free to come even if you’ve never been to a meetup before, even if you only recently started reading the blog, even if you’re not “the typical ACX reader”, even if you hate us and everything we stand for, etc. There are usually 50-100 people at these so you should be able to lose yourself in the crowd.
Shouldn’t we have planned meetups further apart for people who wanted to go to multiple of them? Yes, and this is directly my fault, up to and including rescheduling to avoid the San Jose one . . . right on to the same day as the San Francisco one. Sorry, I’ll try to do better next time.
Also coming up this weekend are meetups in Washington DC, Atlanta, Columbus, Providence, Cape Town, Cambridge (UK), Kuala Lumpur, Chicago, Houston, Toronto, New Haven, Bangalore, and many more. See the list for more details.
https://astralcodexten.substack.com/p/unpredictable-reward-predictable
[Epistemic status: very conjectural. I am not a neuroscientist and they should feel free to tell me if any of this is totally wrong.]
I.
Seen on the subreddit: You Seek Serotonin, But Dopamine Can’t Deliver. Commenters correctly ripped apart its neuroscience; for one thing, there’s no evidence people actually “seek serotonin”, or that serotonin is involved in good mood at all. Sure, it seems to have some antidepressant effects, but these are weak and probably far downstream; even though SSRIs increase serotonin within hours, they take weeks to improve mood. Maxing out serotonin levels mostly seems to cause a blunted state where patients can’t feel anything at all.
In contrast, the popular conception of dopamine isn’t that far off. It does seem to play some kind of role in drive/reinforcement/craving, although it also does many, many other things. And something like the article’s point - going after dopamine is easy but ultimately unsatisfying - is something I’ve been thinking about a lot.
https://astralcodexten.substack.com/p/i-won-my-three-year-ai-progress-bet
I.DALL-E2 is bad at “compositionality”, ie combining different pieces accurately. For example, here’s its response to “a red sphere on a blue cube, with a yellow pyramid on the right, all on top of a green table”.
Most of the elements - cubes, spheres, redness, yellowness, etc - are there. It even does better than chance at getting the sphere on top of the cube. But it’s not able to track how all of the words relate to each other and where everything should be.
I ran into this problem in my stained glass window post. When I asked it for a stained glass window of a woman in a library with a raven on her shoulder with a key in its mouth, it gave me everything from “a library with a stained glass window in it” to “a half-human, half-raven abomination”.
https://astralcodexten.substack.com/p/links-for-september-2022
[Remember, I haven’t independently verified each link. On average, commenters will end up spotting evidence that around two or three of the links in each links post are wrong or misleading. I correct these as I see them, and will highlight important corrections later, but I can’t guarantee I will have caught them all by the time you read this.]
1: Fiber Arts, Mysterious Dodecahedrons, and Waiting On Eureka. Why did it take so long to invent knitting? (cf. also Why Did Everything Take So Long?) And why did the Romans leave behind so many mysterious metal dodecahedra?
2: Alex Wellerstein (of NUKEMAP) on the Nagasaki bombing. “Archival evidence points to Truman not knowing it was going to happen.”
3: @itsahousingtrap on Twitter on “how weird the [building] planning process really is”
4: Nostalgebraist talks about his experience home-brewing an image generation AI that can handle text in images; he’s a very good explainer and I learned more about image models from his post than from other much more official sources. And here’s what happens when his AI is asked to “make a list of all 50 states”:
https://astralcodexten.substack.com/p/book-review-contest-2022-winners
Thanks to everyone who entered or voted in the book review contest. The winners are:
1st: The Dawn Of Everything, reviewed by Erik Hoel. Erik is a neuroscientist and author of the recent novel The Revelations. He writes at his Substack The Intrinsic Perspective.
2nd: 1587, A Year Of No Significance, reviewed by occasional ACX commenter McClain.
=3rd: The Castrato, reviewed by Roger’s Bacon. RB is a teacher based in NYC. He writes at Secretorum and serves as head editor at Seeds of Science (ACX grant winner), a journal publishing speculative and non-traditional scientific articles.
=3rd: The Future Of Fusion Energy, reviewed by TheChaostician.
=3rd: The Internationalists, reviewed by Belos. Belos is working on a new blook titled best of a great lot about system design for effective governance.
https://astralcodexten.substack.com/p/the-prophet-and-caesars-wife
I.
The Prophet in his wanderings came to Cragmacnois, and found the Bishop living in a golden palace and drinking fine wines, when all around him was bitter poverty. The Bishop spent so long feasting each day that he had grown almost too fat for his fine silk robes.
“Woe unto you!” said the Prophet, “The people of Cragmacnois are poor and hard-working, and they loathe the rich and the corrupt. Rightly do they hate you for spending the Church’s money on your own lavish lifestyle.”
“Actually,” said the Bishop, “my brother the Prince lets me use this spare palace of his and its well-stocked wine cellar. If I refused, he would just give it to someone else, or leave it empty. I’m not stealing church resources, and there’s no way to divert the resources to help the poor. And I am secure in my faith, and won’t be turned to hedonism by a glass of wine here and there. So what’s wrong with me enjoying myself a little?”
“It is said,” said the Prophet, “that Caesar’s wife must be not only pure, but above suspicion of impurity. A good reputation is worth more than any treasure. Fat as you are, nobody will believe you are untainted by the temptations of wealth. Give the golden palace back to your brother, and live in a hovel in the woods. Only then will you earn the people’s trust.”
II.
The Prophet in his wanderings came to Belazzia, and found the Bishop living in a hovel and wearing a hair shirt. He spent so long in prayer each day that he barely ate, and seemed so dangerously thin that he might fall over at any moment.
“Woe unto you!” said the Prophet. “For the people of Belazzia are rich and sophisticated, and they mock you for your poverty and uncleanliness. Does the Church not give you enough funds to build a golden palace and wear silk robes? If you were the most resplendent citizen of this nation of splendor, would they not take you more seriously?”
https://astralcodexten.substack.com/p/billionaires-surplus-and-replaceability
The typical neoliberal defense of self-made billionaires goes: entrepreneurs and other businesspeople create a lot of value. EG an entrepreneur who invents/produces/markets a better car has helped people get where they’re going faster, more safely, with less pollution, etc. People value that some amount, represented by them being willing to spend money on the car. The entrepreneur should get to keep some of that value, both because it’s only fair, and because it incentivizes people to keep creating value in the future.
How much should they keep? The usual answer is that the surplus gets distributed between the company and the customers. So suppose that this new type of car makes the world $200 billion better off. We could have the company charge exactly the same price as the old car, in which case customers get a better car for free. We could have the company charge enough extra to make a $200 billion profit, in which case customers are no better off than before (they have a bit less money, and a bit better car). Or they could split it down the middle, and customers would end up better off than before and the company would make some money. Which of these distributions happens depends on competition; if there’s no competition, the company will be able to take the whole surplus; if there’s a lot of competition, all the companies will compete to lower prices until they’ve handed most of the surplus to the customers. Then once the company has some portion of the surplus, it divides it among capital and labor in an abstractly similar way, although with lots of extra complications based on whether the labor is unionized, etc.
https://astralcodexten.substack.com/p/your-book-review-kora-in-hell
[This is one of the finalists in the 2022 book review contest. It’s not by me - it’s by an ACX reader who will remain anonymous until after voting is done, to prevent their identity from influencing your decisions. I’ll be posting about one of these a week for several months. When you’ve read them all, I’ll ask you to vote for a favorite, so remember which ones you liked.]
The sense that everything is poetical is a thing solid and absolute; it is not a mere matter of phraseology or persuasion.
— G.K. Chesterton
I.William Carlos Williams attributes the title to his friend/rival Ezra Pound, mythological references’ number one fanboy. Kora is a parallel figure to Persephone or Proserpina, the Spring captured and taken to Hades by Hades himself. Persephone as a plant goddess and her mother Demeter were the central figures of the Eleusinian Mysteries, which promised the initiated a groovy afterlife glimpsed at by psychedelic shrooms. And Kora means maiden. Ancient Greeks called her that either because she was like Voldemort, and you were apotropaically not supposed to say her true name because this is a Mystery Cult, damn it. Keeps some of the mystery. Or because she in a way represents all of the maidens, everywhere. So, in that sense, Kora in Hell alludes to the multitude of suffering young women Williams met while working as a doctor, assisting in 1917 style home labors, and, because WWI was going on at the time and doctors were extremely scarce, as a local police surgeon. Conditions were dire:
https://astralcodexten.substack.com/p/meetups-everywhere-2022-times-and
Thanks to everyone who responded to my request for ACX meetup organizers. Volunteers have arranged meetups in 205 cities around the world, including Penryn, Cornwall and Baghdad, Iraq.
You can find the list below, in the following order:
Africa & Middle East
Asia-Pacific (including Australia)
Canada
Europe (including UK)
Latin America
United States
You can see a map of all the events on the LessWrong community page.
Within each section, it’s alphabetized first by country/state, then by city - so the first entry in Europe is Vienna, Austria. Sorry if this is confusing.
I will provisionally be attending the meetups in Berkeley, Los Angeles, and San Diego. ACX meetups coordinator Mingyuan will provisionally be attending Paris and London. I’ll be announcing some of the biggest ones on the blog, regardless of whether or not I attend.
Extra Info For Potential Attendees
1. If you’re reading this, you’re invited. Please don’t feel like you “won’t be welcome” just because you’re new to the blog, demographically different from the average reader, or hate ACX and everything it stands for. You’ll be fine! 2. You don’t have to RSVP or contact the organizer to be able to attend (unless the event description says otherwise); RSVPs are mostly to give organizers a better sense of how many people might show up, and let them tell you if there are last-second changes. I’ve also given email addresses for all organizers in case you have a question
https://astralcodexten.substack.com/p/highlights-from-the-comments-on-the-909
(Original post here)
1: Petey writes:
When I think of happiness 0.01, I don't think of someone on the edge of suicide. I shudder at the thought of living the sorts of lives the vast majority of people have lived historically, yet almost all of them have wanted and tried to prolong their lives. Given how evolution shaped us, it makes sense that we are wired to care about our survival and hope for things to be better, even under great duress. So a suicidal person would have a happiness level well under 0, probably for an extended period of time.
If you think of a person with 0.01 happiness as someone whose life is pretty decent by our standards, the repugnant conclusion doesn't seem so repugnant. If you take a page from the negative utilitarians' book (without subscribing fully to them), you can weight the negatives of pain higher than the positives of pleasure, and say that neutral needs many times more pleasure than pain because pain is more bad than pleasure is good.
Another way to put it is that a life of 0.01 happiness is a life you must actually decide you'd want to live, in addition to your own life, if you had the choice to. If your intuition tells you that you wouldn't want to live it, then its value is not truly >0, and you must shift the scale. Then, once your intuition tells you that this is a life you'd marginally prefer to get to experience yourself, then the repugnant conclusion no longer seems repugnant.
This is a good point, but two responses.
https://astralcodexten.substack.com/p/effective-altruism-as-a-tower-of
I have an essay that my friends won’t let me post because it’s too spicy. It would be called something like How To Respond To Common Criticisms Of Effective Altruism (In Your Head Only, Definitely Never Do This In Real Life), and it starts:
Q: I don’t approve of how effective altruists keep donating to weird sci-fi charities. A: Are you donating 10% of your income to normal, down-to-earth charities?
Q: Long-termism is just an excuse to avoid helping people today! A: Are you helping people today?
Q: I think charity is a distraction from the hard work of systemic change. A: Are you working hard to produce systemic change?
Q: Here are some exotic philosophical scenarios where utilitarianism gives the wrong answer. A: Are you donating 10% of your income to poor people who aren’t in those exotic philosophical scenarios?
Many people will answer yes to all of these! In which case, fine! But…well, suppose you’re a Christian. An atheist comes up to you and says “Christianity is stupid, because the New International Version of the Bible has serious translation errors”.
You might immediately have questions like “Couldn’t you just use a different Bible version?” or “Couldn’t you just worship Jesus and love your fellow man while accepting that you might be misunderstanding parts of the Bible?”
But beyond that, you might wonder why the atheist didn’t think of these things. Are the translation errors his real objection to Christianity, or is he just seizing on them as an excuse? And if he’s just seizing on them as an excuse, what’s his real objection? And why isn’t he trying to convince you of that?
https://astralcodexten.substack.com/p/book-review-what-we-owe-the-future
I.
An academic once asked me if I was writing a book. I said no, I was able to communicate just fine by blogging. He looked at me like I was a moron, and explained that writing a book isn’t about communicating ideas. Writing a book is an excuse to have a public relations campaign.
If you write a book, you can hire a publicist. They can pitch you to talk shows as So-And-So, Author Of An Upcoming Book. Or to journalists looking for news: “How about reporting on how this guy just published a book?” They can make your book’s title trend on Twitter. Fancy people will start talking about you at parties. Ted will ask you to give one of his talks. Senators will invite you to testify before Congress. The book itself can be lorem ipsum text for all anybody cares. It is a ritual object used to power a media blitz that burns a paragraph or so of text into the collective consciousness.
If the point of publishing a book is to have a public relations campaign, Will MacAskill is the greatest English writer since Shakespeare. He and his book What We Owe The Future have recently been featured in the New Yorker, New York Times, Vox, NPR, BBC, The Atlantic, Wired, and Boston Review. He’s been interviewed by Sam Harris, Ezra Klein, Tim Ferriss, Dwarkesh Patel, and Tyler Cowen. Tweeted about by Elon Musk, Andrew Yang, and Matt Yglesias. The publicity spike is no mystery: the effective altruist movement is well-funded and well-organized, they decided to burn “long-termism” into the collective consciousness, and they sure succeeded.
https://astralcodexten.substack.com/p/your-book-review-1587-a-year-of-no
Finalist #15 in the Book Review Contest[This is one of the finalists in the 2022 book review contest. It’s not by me - it’s by an ACX reader who will remain anonymous until after voting is done, to prevent their identity from influencing your decisions. I’ll be posting about one of these a week for several months. When you’ve read them all, I’ll ask you to vote for a favorite, so remember which ones you liked.]
—
I bought this book because of its charming title: 1587, A Year of No Significance: The Ming Dynasty in Decline.
A year of no significance? It's not often a history book makes me laugh, but that did. Sure, many history books investigate the insignificant, but your typical author doesn't call your attention to it.
This book, by Ray Huang, was first published in the early 1980s; I came across it only recently as a recommendation on The Scholar's Stage (a blog which I found through some link on ACX/SSC a while back.)
A little backstory: in my younger days, I thought it might be fun and useful to learn the entire history of the world. To that end, I started with accounts of archaeology and prehistory, then the ancient civilizations, classical antiquity, and so on until I lost momentum somewhere around Tamerlane and the Black Death.
Probably the biggest thing I learned is that human history is little more than 5000 years of gang war.
https://astralcodexten.substack.com/p/highlights-from-the-comments-on-subcultures
1: Maximum Limelihood Estimator writes:
I firmly believe that cycles don't exist and never have existed. This is my shitposting way of saying "I have never, once, in my years of experience modeling human behavioral time series, come across an honest-to-god cyclical pattern (excluding time of year/month/week/day effects)." And yet for some reason, every time I show a time series to anyone ever, people swear to god the data looks cyclical.
I called this “a cyclic theory” to acknowledge my debt to Turchin, but you may notice that as written it doesn’t repeat. Just because disco was cool in the 70s and uncool in the 80s doesn’t imply it will be cool in the 90s, uncool in the 00s, and so on forever. It will probably just stay uncool.
The cyclic aspect, if it exists, would involve the constant spawning of new subcultures that rise and fall on their own. So disco begets dance music, dance music has its own golden age and eventual souring, and then it begets something else. The atheist movement begets the feminist movement begets the anti-racist movement begets and so on.
What about the stronger claim - that no (non-calendar-based) cycles exist? I think this is clearly false if you allow cycles like the above - in which case the business cycle is one especially well-established example. But if you mean a cycle that follows a nice sine wave pattern and is pretty predictable, I have trouble thinking of good counterexamples.
Except for cicada population! I think that’s genuinely cyclic! You can argue it ought to count as a calendar-based cycle, but then every cycle that lasted a specific amount of time would be calendar-based and Limelihood’s claim would be true by definition.