Three camps in AI x-risk discussions: My personal very oversimplified overview

[I originally wrote this as a Facebook post, but I’m cross-posting here in case anybody finds it useful.]

Here’s my current overview of the AI x-risk debate, along with a very short further reading list:

At a *very* oversimplified but, I think, still useful level, it looks to me like there are basically three “camps” for how experts relate to AI x-risks. I’ll call the three camps “doomers”, “worriers”, and “dismissers”. (Those terms aren’t original to me, and I hope the terminology doesn’t insult anybody—apologies if it does.)

1) Doomers: These are people who think we are almost certainly doomed because of AI. Usually this is based on the view that there is some “core” or “secret sauce” to intelligence that, for example, humans have but chimps don’t. An AI either has that kind of intelligence or it doesn’t—it’s a binary switch. Given our current trajectory it looks entirely possible that we will at some point (possibly by accident) develop AIs with that kind of intelligence, at which point the AI will almost immediately become far more capable than humans because it can operate at digital speeds, copy itself very quickly, read the whole internet, etc. On this view, all current technical alignment proposals are doomed to fail because they only work on AIs without the secret sauce, and they’ll completely fall apart for AIs with the secret sauce because those AIs will be fundamentally different from previous systems. We currently have no clue how to get a secret-sauce-type AI to be aligned in any way, so it will almost certainly be misaligned by default. If we suddenly find ourselves confronted with a misaligned superintelligence of this type, then we are almost certainly doomed. The only way to prevent this, given the state of current alignment research, is to completely stop all advanced AI research of the type that could plausibly lead to secret-sauce-type AGIs until we completely solve the alignment problem.

People in this camp often have very high confidence that this model of the world is correct, and therefore give very high estimates for “P(doom)”, often >95% or even >99%. Prominent representatives of this view include Eliezer Yudkowsky and Connor Leahy.

For a good, detailed presentation of this view, see An artificially structured argument for expecting AGI ruin by Rob Bensinger.

[EDIT: Another common reason for being a Doomer is having really short timelines (i.e., thinking we’re going to hit AGI very soon): by default you think the AGI will be misaligned and take over, and with such short timelines we won’t have time to figure out how to prevent this. You could of course also be a Doomer if you are just very pessimistic that humanity will solve the alignment problem even if we do have more time. But my impression is that most Doomers have such high P(doom) estimates mainly because they have very short timelines and/or because they subscribe to something like the secret-sauce-of-intelligence theory.]

2) Worriers: These people often give a wide variety of reasons why very advanced AI might lead to existential catastrophe. Reasons range from destabilizing democracy and the world order, to enabling misuse by bad actors, to humans losing control of the world economy, to misaligned rogue AIs deliberately taking over the world and killing everybody. Worriers might also think that the doomer model is entirely plausible, but they might not be as confident that it is correct.

Worriers often give P(doom) estimates ranging anywhere from less than 0.1% to more than 90%. Suggestions for what to do about it also vary widely. In fact, they vary so widely that they often contradict each other: for example, some worriers think pushing ahead with AGI research is the best thing to do, because they think that’s the only way we can develop the alignment tools we’ll need later. Others vehemently disagree and think that pushing ahead with AGI research is reckless and endangers everybody.

I would guess that the majority of people working on AGI safety or policy today fall into this camp.

Further reading for this general point of view:

- Hendrycks et al., An Overview of Catastrophic AI Risks

- Yoshua Bengio, FAQ on Catastrophic AI Risks

(Those sources have lots of references you can look up for more detail on particular subtopics.)

3) Dismissers: People in this camp say we shouldn’t worry at all about AGI x-risk and that it shouldn’t factor at all into any sort of policy proposals. Why might someone say this? Here are several potential reasons:

a) AGI of the potentially dangerous type is very far away (and we are very confident of this), so there’s no point doing anything about it now. See for example this article.

b) The transition from current systems to the potentially dangerous type will be sufficiently gradual that society will have plenty of time to adjust and take the necessary steps to ensure safety (and we are very confident of this).

c) Alignment/control will be so easy that it’ll be solved by default, no current interventions necessary. Yann LeCun seems to fall into this category.

d) Yeah, maybe it’s potentially a big problem, but I don’t like any of the proposed solutions: they all have tradeoffs, and they’re worse than the problems they seek to address. I think a lot of dismissers fall into this category, including for example many of those who argue against any sort of government intervention on principle, or people who say that focusing on x-risk distracts from current harms.

e) Some people seem to have a value system where AGI taking over and maybe killing everybody isn’t actually such a bad thing, because it’s the natural evolution of intelligence, or something like that.

f) There are also people who claim to have an epistemology where they only ever worry about risks that are rigorously grounded in lots of clear scientific evidence, or something along those lines. I don’t understand this perspective at all, though, for reasons nicely laid out by David Krueger here.

Part of my frustration with the general conversation on this topic is that people on all sides of the discussion often seem to talk past each other, use vague arguments, or (frequently) opt for scoring rhetorical points for their team over actually stating their views or making reasoned arguments.

For a good overview of the field similar to this post but better written and with a bit more on the historical background, see A Field Guide to AI Safety by Kelsey Piper.

If you want to get into more detail on any of this, check out stampy.ai or any of these free courses:

- ML Safety

- AI Safety Fundamentals—Alignment

- AI Safety Fundamentals—Governance
