Utilitarianism implies that if we build an AI that successfully maximizes utility/value, we should be ok with it replacing us. Sensible people add caveats related to how hard it’ll be to determine the correct definition of value or check whether the AI is truly optimizing it.
As someone who often passionately rants against the AI successionist line of thinking, the most common objection I hear is "why is your definition of value so arbitrary as to stipulate that biological meat-humans are necessary?" This misses the crux: I agree such a definition of moral value would be hard to justify.
Instead, my opposition to AI successionism comes from a preference for my own kind. This is hardwired into me by biology. I prefer my family members to randomly-sampled people with similar traits. I would certainly not elect to sterilize or kill my family members so that they could be replaced [...]
---
First published: May 4th, 2025
Source: https://www.lesswrong.com/posts/MDgEfWPrvZdmPZwxf/why-i-am-not-a-successionist