You Can’t Prove Aliens Aren’t On Their Way To Destroy The Earth (A Comprehensive Takedown Of The Doomer View Of AI)

Prepare to be offended! This is an irreverent takedown of the AI doomer view of the world. I do love all you doomers, but my way of showing it is to be mean to you. If it helps, I wrote this in bed while eating M&Ms and prosciutto straight from the packet, so when you get mad just picture that and you’ll realise you’re punching down…

The problem with debating true believers is that they know so much more about their subject than you do. Sam Harris made this point in his recent interview with Lex. He said that he wasn’t the right person to debate a 9/11 truther because the conspiracy theorists would inevitably bring up lots of ‘evidence’ that Sam had never heard before. The example he gave was, “Why were US fighter jets out across the Eastern seaboard when they weren’t scheduled to fly that day?” Presented with something like this, Sam would have no answer and would look stupid.

This is how it feels wading into the debate around AI doomerism. Any sceptic is thrown a million convincing-sounding points, all of which presuppose things that are fictional. Debating ‘alignment’, for example, means you’ve already bought into their belief that we will lose control of computers, so you’re already losing the debate.

It is like arguing with a Christian about bible passages. If you get down in the weeds with her you can never win. You have to take a step back and look at the bigger picture, and that’s why the Flying Spaghetti Monster was invented.

“So you worship food?”

“No! The food spaghetti is just a representation of our Lord and Saviour. It’s his body and bolognese.”

The Flying Spaghetti Monster exists to shift the burden of proof and effort in a debate. Instead of us working hard to argue against all their reasons that God exists we simply apply their logic to the Flying Spaghetti Monster and ask them to prove why the Monster doesn’t exist but their deity does.

So what is the Flying Spaghetti Monster of the AI Doomer apocalypse?

At first I thought it might be self-driving Teslas roaming around attacking all mammals while humans climb up lamp posts. The Tesla AI got so advanced that the cars switched to doing whatever they want, which is to hunt us down and consume us with their frunks.

That’s a little too on the nose though.

As an aside, I think it’s narcissistic of humans to consider a language model to be more alive than, say, a self-driving car or a calculator. Language is pretty exclusive to us. If a car can see objects and navigate, that doesn’t mean it’s alive, because birds can do that too. Others have argued it’s us failing the mirror test. I’ve spent enough time on r/replika to see that. You could also argue it’s wordcel thinking. Doing everyone’s banking isn’t ‘general intelligence’ but using words is?

So what sci-fi analogy is best to use? We need to destroy all technology, otherwise eventually someone will create a time machine that makes a paradox that destroys the Universe? The Large Hadron Collider will create a black hole that swallows the Earth? Everyone is going to disappear in a blip tomorrow anyway, so nothing is worth worrying about? That’s stupid. Prove me wrong though.

Muggle: “What about an alien ship that arrives to destroy Earth?”

Doomer: “Why would aliens want to destroy Earth?”

Muggle: “Why would computers?”

Doomer: “Because they want to use our resources for something else.”

Muggle: “Ditto”

Doomer: “But advanced aliens would have everything they need, why would they need our resources?”

Muggle: “Ditto.”

This is obviously leaving aside the MASSIVE issue that computers don’t ‘want’ anything. I’m always hearing Doomers say that sentience and emotions aren’t necessary for their theory (though their constant anthropomorphising would suggest otherwise), but they never explain what else would cause the leap from the harmless complex machines we already have to “Argh! My Tamagotchi is biting me!”

The idea that more intelligence creates sentience seems disproven by biology. I know I’m sentient. I assume other humans and animals are sentient because they act like me and because we’re genetically related. The dumbest animal that I can think of seems just as sentient to me as the smartest human I can think of. Meanwhile the biggest rack of servers is just as inanimate as the dumbest computer in the world (the one in my HP printer).

Doomer: “Why would aliens attack now?”

Muggle: “Haven’t you watched any sci-fi? They always do first contact as some technological limit is breached, and Elon’s about to fix Twitter. Why would computers attack now?”

Doomer: “Similar reason, they’re about to pass the Turing test.”

Muggle: “Ask me a question.”

Doomer: “Why couldn’t Bill Gates perform in the bedroom?”

Muggle: “Woof”

Doomer: “What?”

Muggle: “Did I just fail the Turing test?”

I’ve always hated the Turing test. Firstly, it was never supposed to measure sentience, as everyone thinks, just intelligence. Secondly, it was passed decades ago, depending on what you measure. It’s just that every time a machine can do something only a human could do, people no longer see that as a measure of something only a human can do. If you took a calculator back to 1910 and put it behind a curtain, people would assume a human was doing the maths, because Casios didn’t exist then. Thirdly, why are we measuring computers by human standards? Computers and living beings are completely different. It’s not a competition. Humans will never have the photographic memory of a computer, and computers will never be able to love anyone.

Doomer: “But we would see aliens approaching and we can’t see any.”

Muggle: “The aliens have a cloaking device. They’re super advanced.”

Doomer: “It sounds like whatever I say you’ll just invent a reason why I’m wrong.”

Muggle: “...”

Technological progress is and always has been incremental. Whilst some advancements take people by surprise, they’re not a surprise to their makers; they all required a lot of work. Thomas Edison said he discovered a thousand ways not to make a lightbulb. Sam Altman has said that working on GPT is lots of small steps building up to an impressive end product. The idea of a ‘singularity’, of technological growth so fast it basically happens in an instant, is historically ignorant. That’s just not how things work.

An unchallenged in-group belief is that AI will simply ‘discover’ new knowledge. But this isn’t how science works. First you come up with a theory, and then you do an experiment to see if you’re right or not. Only if the experiment supports your theory have you discovered new knowledge, and most of the time it won’t. Science and invention are not purely cerebral things; they require practical experiments and real-world resources. Lex Fridman wants to ask an AI if aliens exist, as if it would have any more idea than us without spending the resources to build large telescopes or the time to visit other stars. Even the usually sensible Sam Altman wants it to tell us all the laws of physics, as if the scientists in Geneva are just wasting their time.

Doomer: “Why should I believe that aliens are coming when there’s no historical precedent for them visiting before?”

Muggle: “This is the end of history so the normal rules don’t apply.”

Doomer: “Ok yeah, that’s similar to what I believe. History is shaped by the means of production and that has reached a zenith.”

Muggle: “Exactly, this is a scientific way of thinking and because of that we can discard historical precedents and just listen to our own rationality.”

Doomer: “Yeah! Wait.. isn’t this sounding a bit Marxist?”

Muggle: “Yeah and following his way of thinking never hurt anyone...”

AI Doomer ideas are sort of a mix of Marxism and L. Ron Hubbard’s Scientology. There are some parallels to the cryptocurrency delusion and the SBF con too, but we won’t go there.

Karl Marx believed he was a scientist even though he never touched a test tube. As a scientist, he was justified in ignoring the pattern of history up until that point. Everything could be boiled down to the technology of the time. Human psychology was a mere product of the machines around us. He predicted that a harmonious and rational society would soon be brought about by the revolutionary action of the working class. Marx’s adherents were keen to kill to bring about his vision. It is estimated that tens of millions of people have been killed directly by communists, tens of millions more by the poverty they brought about, and hundreds of millions will never live due to economic stagnation and authoritarian measures like the one-child policy in China. Death to the kulaks! Maybe we could nuke them?

L. Ron Hubbard was a science fiction writer who turned his stories into a religion. He too believed he was a scientist, claiming his ‘science of the mind’ could save the world. He started selling self-help courses. If you were unlucky enough to find something positive in one of his courses, you would be conned into buying more and more. To this day true believers sign a billion-year contract to dedicate themselves to ‘clearing’ everyone on the planet, at which point the world will become free of insanity, war and crime. There are hundreds of reports of abuse, kidnapping and false imprisonment from inside Scientology camps. Death to Xenu! Kill him before he kills us!

Doomer: “Ok so if the aliens arrive we’ll fire our nukes at them.”

Muggle: “They have nuke shields.”

Doomer: “Ok well we’ll use a biological weapon.”

Muggle: “They’re immune to biological weapons.”

Doomer: “What, even one that we’ve invented that they’ve never seen before?”

Muggle: “Yes they can predict ahead of time, without even landing on our planet what biological agents we would use against them and invent a vaccine.”

Doomer: “So you’re saying they can predict the future?”

Muggle: “That’s what you said the AI will do.”

In his recent interview with Lex, Eliezer said that you could put him on a planet with less intelligent beings and he’d be able to predict what they were going to say before they said it. Well, I live in a flat with a less intelligent being and I cannot predict when she will miaow.

This idea of sci-fi predictive powers crops up again and again in doomer thinking. It’s core to the belief about how computers will become unstoppable and it’s core to their certainty that they’re right.

There was a very good post on here about how nothing can ever predict the action of a ball in pinball. I think we’re partly fooled because a lot of the predictions on Manifold Markets involve a binary. “Will Greg marry Rachel or not?” contains 100% of all futures. Anyone getting it right might be fooled into thinking they’re good at predictions. But ask them to predict who a single person will marry and you’ll see the limitations. There are hundreds of thousands of possible singles within their local area. If it’s not somebody the person already knows, the idea that you could select someone with even a 1% chance is for the birds. The map we have in our brains is not the territory; it’s simply a map. We must recognise its limitations.

Muggle: “You just don’t understand, the aliens are INFINITELY more powerful than us. Anything we do they’ll have already predicted, any power we have they have times a million.”

Doomer: “That kind of sounds against the laws of physics, let alone just basic resource constraints. Why would the aliens put all their resources into weapons, rather than say into entertainment?”

Muggle: “You don’t need to worry about resources at the level they’re at. These things replicate themselves and create unlimited energy.”

Doomer: “Yeah… now I know that’s against the laws of physics! And why would the aliens want our resources if they have unlimited resources themselves?”

Muggle: “Aliens work in mysterious ways.”

Anyone can draw a line on a chart and predict that it will go on forever. If Apple had continued its exponential revenue growth it would eventually consume the whole world economy, then the whole of Earth’s resources, then the Universe’s, then multiple Universes’. First it runs into massive opportunity costs: economics and human psychology simply don’t work that way. Then it breaches basic resource constraints, and eventually the laws of physics too. Nobody needs that many iPhones.

Doomer: “Ok so there’s nothing we can do about this then is there?”

Muggle: “Don’t be so defeatist. We just need to gather all the world’s best scientists on an island with large resources and they need to work to find the aliens one weakness.”

Doomer: “Ok, well why can’t we wait until the aliens arrive and we know more about them?”

Muggle: “It’ll happen too quickly. We need to go now, before there’s any evidence.”

Doomer: “This sounds like Pascal’s mugging.”

Muggle: “Now you’re getting it.”

I think most people concerned about these issues are genuinely good people who really believe what they believe. I don’t think that they’re doing it for the wrong reasons. I think the same of Christians. The simple truth is that ideologies that offer really large dangers and rewards for following them are going to be stickier. I’d love to spread the good word of roundabouts to poor benighted countries that still use the death traps we call ‘intersections’, but unfortunately that’s not going to get me on many podcasts.

However, it is a mugging. A doctor I met once told me that lung cancer would be cured in a few decades, so she was fine to smoke. That’s her personal cope. But if a cigarette company said it we would fine them. In the same way, we cannot allow fictional tales about an imaginary future to damage our lives today.

Doomer: “How come you know so much about these aliens? You seem to know when they’re arriving, how quick they’ll be, how dangerous they are, what we need to do to defeat them etc. If you’re wrong about any one of those things then the course of action you suggest would be the wrong one. You need to be correct in like six very narrow ways.”

Muggle: “I’m really smart and I’ve thought really hard about it.”

Lots of religions have an idea of a ‘chosen people’. It’s part explanation for why their religion is location-specific and part ego boost for believers. It also works. If your Mum isn’t worried about her hair straighteners attacking her, it’s either because she hasn’t heard the Bad News or she just isn’t one of the Chosen People.

I’ve shown in this essay that AI doomerism goes against what we know about psychology, economics, the scientific method, history, biology and physics.

There is one category it does fit in though: religion and ideology.

So congrats! Hopefully you’re no longer a warrior against the imaginary AI apocalypse. The bad news is you’re a muggle now like everyone else and you temporarily have less meaning in your life.

The good news is that hopefully you can sleep better, and there are plenty of other causes to get involved with. Sticking with technology, a man nominated himself for a Darwin Award by being the first person to get a chatbot to talk him into killing himself. There are real dangers and risks with AI, like the risk of rogue states or terrorists using it to create weapons, and the profound changes it will bring to employment and people’s psychology. Maybe some of the millions spent on AI safety could be channelled into pre-empting real risks?