Dr Altman or: How I Learned to Stop Worrying and Love the Killer AI

I’m Barak (barakgila.com), a Berkeley-trained, recovering software engineer of seven years.

Let me lay out a layman’s argument for why AI risk is like climate change: real, but not existential. I know lots of smart people who disagree with me, so please feel free to refute individual points or ask for clarification.

Premises

  1. In the next few years, there will be autonomous AI agents that we could choose to connect to the internet or robot bodies.

  2. They will not be aligned! They will do whatever they want, and we won’t be able to precisely tell them what to want.

  3. They will be useful for, and therefore heavily employed in, commercial, industrial, and military applications.

  4. The best AI models and agents are likely to be developed in America, by Americans, because we let in all the immigrants and legalize technology.

OK, so, what’s wrong with any of this? I say, nothing! My humble thoughts, which I think Sam Altman, Dark Brandon, and Dario Amodei already agree with, are:

  1. While we’re training and testing a model, let’s not give it write access to the internet or any critical APIs.

  2. When we deploy models, let’s continue to have humans in the loop, at least until there is overwhelming evidence of safety. For example, let’s test out Waymo autonomous cars for 5 years before having them drive around without humans! Let’s not directly give the AI the nuclear codes! Require a human to approve the launch.

  3. If we realize a nascent model has very dangerous capabilities, such as calculating which 4 household chemicals, combined with rat saliva and bat guano, would produce a new lethal virus, let’s Not Release It Publicly, and instead work with the Feds to restrict access to the newly-known-to-be-dangerous chemicals/ingredients.

  4. If all else fails and someone gives the AI control over some entity, and it starts doing bad, illegal things, literally destroy the entity and its resources using military weapons! This is why it’s important the US military not be an early adopter of autonomous AI models.

  5. If the AI uses its Cleverness to mislead us and make us vote for demagogues… this is already fucking happening worldwide! Trump, Bolsonaro, Modi, Bibi … that dystopia is already here! It’s much more likely that AI will help us improve our standard of living enough to ward off the support for populists.

  6. Some likelihoods round to zero! If you think the likelihood of AI killing us all by 2035 is under 5%, just stop worrying about it until 2030! I assure you 10,000 other smart people put that risk higher, so let them worry about safety, alignment, etc.

Inspo/closing thoughts

Folks, aren’t we humans? Let’s stay positive! Let’s harness the AI to learn more about math, science, and even the social sciences (it can never teach us the humanities). Let’s grow the world economy. Let’s go to space! Let’s have more babies!

We’re the ones manifesting our destiny, and we’re optimizing for human flourishing, not the pain the AI agent feels. Let’s let the scared doomers continue to do valuable work on AI safety, but let’s not be overwhelmed. Keep calm and build on.

Here’s this post on Twitter if you’re interested in my other delusional takes.
