It's so funny when people say that we could just trade with a superintelligent/super-numerous AI.
We don't trade with chimps. We don't trade with ants. We don't trade with pigs. We take what we want. If they have something we want, we enslave them. Or worse: we farm them!
A superintelligent/super-numerous AI killing us all isn't actually the worst outcome of this reckless gamble the tech companies are making with all our lives. If the AI wants something that requires living humans and it's not aligned with our values, it could make factory farming look like a tropical vacation. We're superintelligent compared to animals and we've created hell for trillions of them.
Let's not risk repeating this.
The Most Horrific Alignment Failure In History
The last time a superintelligent species arrived (humans), what happened?
It created literal hell on earth for most mammals, because it was unaligned with their interests.
I avoided writing this for months, but I changed my mind when OpenAI announced AGI could be 4 years away, and we saw a sharp increase in 2-5 year predictions by credible researchers.
We need to stare these numbers in the face:
- The 6th Mass Extinction: wild land mammal biomass has plummeted ~85% since humans arrived. Since 1900 alone, wild mammal biomass has declined ~70%.
- In the blink of an eye, 96% of mammal biomass became humans or our slaves. We literally farm them for energy, like in The Matrix.
- It’s not just mammals: wild birds are down to just 29% of all bird biomass. Most birds alive today live short, tortured lives on factory farms.
- Our slaves experience a never-ending holocaust. Imagine living in a feces-filled cage so tiny you can’t even turn around. Your bones break from disuse until you can’t even stand anymore. You’re injected with grotesque drugs, and then you’re sent into a literal gas chamber.
- 99.9% of all species that ever existed have gone extinct. Extinction is the default outcome. Remember this when someone says AI x-risk is an “extraordinary claim requiring extraordinary evidence.”
---
Takeaway: AGI could be the best thing we ever invent, but there are outcomes worse than extinction. Humanity is playing with fire. We can’t retreat into hopium dens. We can’t fix alignment risks we don’t acknowledge.
5 fallacies:
1) The “AIs Would Have To Want To Kill Us” Fallacy
2) The “Superintelligent Means Like 5% Smarter Than Me” Fallacy
3) The “ASIs Will Trade With Mere Humans Instead Of Taking Whatever the Fuck They Want” Fallacy
4) The “ASIs Will Only Kill Us After They Finish Colonizing The Universe” Fallacy
5) The “Mere Humans Are Totally Gonna Be Able to Keep Up With Machiavellian Superintelligences And Play Them Off Each Other” Fallacy
AI doomer: 10-90% chance of doom, will update priors based on new information.
Effective Accelerationist: 0% chance, nothing will change my mind.
Which one is the religious cult filled with true believers again?
Drug company: We invented a drug that could solve all health problems, we’re putting it in the water supply now.
Society: You're WHAT?
Drug company: Haha I know right. We have to though, because our competitors are doing it too. Fuckin Moloch lol.
Society: The fuck?? Surely you at least safety-tested it for 10-15 years first, like other drugs? Even pimple drugs?
Drug company: Uh… 6 months. But that’s more than some of our competitors! We’re the safety guys :)
Society: You can’t be serious. People could get sick!
Drug company: Actually, we think there’s a 10% chance everyone will die.
---
This is why 82% of Americans want to slow down AI vs just 8% who want to speed it up.