“X distracts from Y” as a thinly-disguised fight over group status / politics

1. Introduction

There’s a popular argument that says:

It’s bad to talk about whether future AI algorithms might cause human extinction, because that would be a distraction from the fact that current AI algorithms are right now causing or exacerbating societal problems (misinformation, deepfakes, political polarization, algorithmic bias, maybe job losses, etc.)

For example, Melanie Mitchell makes this argument (link & my reply here), as does Blake Richards (link & my reply here), as does Daron Acemoglu (link & a reply by Scott Alexander here & here), and many more.

In Section 2 I will argue that if we try to flesh out this argument in the most literal and straightforward way, it makes no sense, and is inconsistent with everything else these people are saying and doing. Then in Section 3 I’ll propose an alternative elaboration that I think is a better fit.

I’ll close in Section 4 with two ideas for what we can do to make this problem better.

(By “we”, I mean “people like me who are very concerned about future AI extinction risk (x-risk[1])”. That’s my main intended audience for this piece, although everyone else is welcome to listen in too. If you’re interested in why someone might believe that future AI poses an x-risk in the first place, you’re in the wrong place—try here or here.)

2. Wrong way to flesh out this argument: This is about zero-sum attention, zero-sum advocacy, zero-sum budgeting, etc.

If we take the “distraction” claim above at face value, maybe we could flesh it out as follows:

Newspapers can only have so many front-page headlines per day. Lawmakers can only pass so many laws per year. Tweens can only watch so many dozens of TikTok videos per second. In general, there is a finite supply of attention, time, and money. Therefore, if more attention, time, and money are flowing to Cause A (= future AI x-risk), then there’s less attention, time, and money left over for any other Cause B (= immediate AI problems).

I claim that this is not the type of claim that people are making. After all, if that’s the logic, then the following would be equally sensible:

  • “It’s bad to talk about police incompetence, because it’s a distraction from talking about police corruption.”

  • “It’s bad to talk about health care reform, because it’s a distraction from talking about climate change.”

Obviously, nobody makes those arguments. (Well, almost nobody—see next subsection.)

Take the first one. I think it’s common sense that concerns about police incompetence do not distract from concerns about police corruption. After all, why would they? It’s not like newspapers have decided a priori that there will be one and only one headline per month about police problems, and therefore police incompetence and police corruption need to duke it out over that one slot. If anything, it’s the opposite! If police incompetence headlines are getting clicks, we’re likely to see more headlines on police corruption, not fewer. It’s true that the total number of headlines is fixed, but it’s perfectly possible for police-related articles to collectively increase, at the expense of articles about totally unrelated topics like Ozempic or real estate.

By the same token, there is no good reason that concerns about future AI causing human extinction should be a distraction from concerns about current AI:

  • At worst, they’re two different topics, akin to the silly idea above that talking about health care reform is a problematic distraction from talking about climate change.

  • At best, they are complementary, and thus akin to the even sillier idea above that talking about police incompetence is a problematic distraction from talking about police corruption.

Supporting the latter perspective, immediate AI problems are not an entirely different problem from possible future AI x-risk. Some people think they’re extremely related—see for example Brian Christian’s book. I don’t go as far as he does, but I do see some synergies. For example, both current social media recommendation algorithm issues and future AI x-risk issues are exacerbated by the fact that huge trained ML models are very difficult to interpret and inspect. By the same token, if we work towards international tracking of large AI training runs, it might be useful for both future AI x-risk mitigation and ongoing AI issues like disinformation campaigns, copyright enforcement, AI-assisted spearphishing, etc.

2.1 Side note on Cause Prioritization

I said above that “nobody” makes arguments like “It’s bad to talk about health care reform, because it’s a distraction from talking about climate change”. That’s an exaggeration. Some weird nerds like me do say things kinda like that, in a certain context. That context is called Cause Prioritization, a field of inquiry usually associated these days with Effective Altruism. The whole shtick of Cause Prioritization is to take claims like the above seriously. If we only have so much time in our life and only so much money in our bank account, then there are in fact tradeoffs (on the margin) between spending it to fight for health care reform, versus spending it to fight for climate change mitigation, versus everything else under the sun. Cause Prioritization discourse can come across as off-putting, and even offensive, because you inevitably wind up in a position where you’re arguing against lots of causes that you actually care deeply and desperately about. So most people just reject that whole enterprise. Instead they don’t think explicitly about those kinds of tradeoffs, and insofar as they want to make the world a better place, they tend to do so in whatever way seems most salient and emotionally compelling, perhaps because they have a personal connection, etc. And that’s fine.[2] But Cause Prioritization is about facing those tradeoffs head-on, and trying to do so in a principled, other-centered way.

If you want to do Cause Prioritization properly, then you have to dive into (among other things) a horrific minefield of quantifying various awfully-hard-to-quantify things like “what’s my best-guess probability distribution for how long we have until future x-risk-capable AI may arrive?”, or “exactly how many suffering chickens are equivalently bad to one suffering human?”, or “how do we weigh better governance in Spain against preventing malaria deaths?”.
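To make the flavor of those tradeoffs concrete, here is a deliberately toy sketch of how a back-of-the-envelope Cause Prioritization comparison might look in code. Every number and cause name below is a made-up placeholder for illustration, not a claim about any real cause:

```python
# Toy back-of-the-envelope Cause Prioritization sketch.
# All numbers are made-up placeholders, not real estimates.

causes = {
    # p:             probability the problem actually materializes
    # badness:       harm (in arbitrary units) if it does
    # tractability:  harm averted per marginal $1M spent, as a fraction
    # neglectedness: scaling factor for how uncrowded the cause is
    "cause_a": {"p": 0.10, "badness": 1e9, "tractability": 1e-6, "neglectedness": 0.8},
    "cause_b": {"p": 1.00, "badness": 1e6, "tractability": 1e-4, "neglectedness": 0.2},
}

def marginal_value(c):
    """Expected good done per marginal $1M: probability x badness x
    tractability, scaled by how neglected the cause is."""
    return c["p"] * c["badness"] * c["tractability"] * c["neglectedness"]

# Rank causes by expected value of the marginal dollar.
ranked = sorted(causes, key=lambda name: marginal_value(causes[name]), reverse=True)
for name in ranked:
    print(name, marginal_value(causes[name]))
```

The point of the sketch is not the arithmetic (which is trivial) but the inputs: every one of those placeholder numbers corresponds to an awfully-hard-to-quantify judgment call of the kind described above, and the ranking can flip entirely depending on how you fill them in.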

Anyway, I would be shocked if anyone saying “we shouldn’t talk about future AI risks because it’s a distraction from current AI problems” arrived at that claim via a good-faith open-minded attempt at Cause Prioritization.

Indeed, as mentioned above, there are people out there who do try to do Cause Prioritization analyses, and “maybe future AI will cause human extinction” tends to score right at or near the top of their lists. (Example.)

2.2 Conclusion

So in conclusion, people say “concerns about future AI x-risks distract from concerns about current AI”, but if we flesh out that claim in a superficial, straightforward way, then it makes no sense.

…And that was basically where Scott Alexander left it in his post on this topic (from which I borrowed some of the above examples). But I think Scott was being insufficiently cynical. I offer this alternative model:

3. Better elaboration: This is about zero-sum group status competition

I don’t think anyone is explicitly thinking like the following, but let’s at least consider the possibility that something like this is lurking below the surface:

If we endorse actions to mitigate x-risk from future AIs, we’re implicitly saying “the people who are the leading advocates of x-risk mitigation, e.g. Eliezer Yudkowsky, were right all along.” Thus, we are granting those people status and respect. And thus everything else that those same people say and believe—especially but not exclusively on the topic of AI—implicitly gets more benefit-of-the-doubt.

Simultaneously on the other side, if we endorse actions to mitigate x-risk from future AIs, we’re implicitly saying “the people who are leading advocates against x-risk mitigation, e.g. Timnit Gebru, were wrong all along.” Thus, we are sucking status and respect away from those people. And thus everything else that those people say and believe—especially but not exclusively on the topic of AI—gets some guilt-by-association.

Now, the former group of people seem much less concerned about immediate AI concerns like AI bias & misinformation than the latter group. [Steve interjection: I don’t think it’s that simple—see Section 4.2 below—but I do think some people currently believe this.] So, if we take actions to mitigate AI x-risk, we will be harming the cause of immediate AI concerns, via this mechanism of raising and lowering people’s status, and putting “the wrong people” on the nightly news, etc.

Do you see the disanalogy to the police example? The people most vocally concerned about police incompetence, versus the people most vocally concerned about police corruption, are generally the very same people. If we elevate those people as reliable authorities, and let them write op-eds, and interview them on the nightly news, etc., then we are simultaneously implicitly boosting all of the causes that these people are loudly advocating, i.e. we are advancing both the fight against police incompetence and the fight against police corruption.

As an example in the other direction, if a left-wing USA person said:

It’s bad for us to fight endless wars against drug cartels—it’s a distraction from compassionate solutions to drug addiction, like methadone clinics and poverty reduction.

…then that would sound perfectly natural to me! Uncoincidentally, in the USA, the people advocating for sending troops to fight drug cartels, and the people advocating for poverty reduction, tend to be political adversaries on almost every other topic!

4. Takeaways

4.1 Hey AI x-risk people, let’s make sure we’re not pointlessly fanning these flames

As described above, there is no good reason that taking actions to mitigate future AI x-risk should harm the cause of solving immediate AI-related problems; if anything, it should be the opposite.

So: we should absolutely, unapologetically, advocate for work on mitigating AI x-risk. But we should not advocate for work on mitigating AI x-risk instead of working on immediate AI problems. That’s just a stupid, misleading, and self-destructive way to frame what we’re hoping for. To be clear, I think this kind of weird stupid framing is already very rare on “my side of the aisle”—and far outnumbered by people who advocate for work on x-risk and then advocate for work on existing AI problems in the very next breath—but I would like it to be even rarer still.

(I wouldn’t be saying this if I didn’t see it sometimes; here’s an example of me responding to (what I perceived as) a real-world example on Twitter.)

In case the above is not self-explanatory: I am equally opposed to saying we should work on mitigating AI x-risk instead of working on the opioid crisis, and for the same reason. Likewise, I am equally opposed to saying we should fight for health care reform instead of fighting climate change.

I’m not saying that we should suppress these kinds of messages because they make us look bad (although they obviously do); I’m saying we should suppress these kinds of messages because they are misleading, for reasons in Section 2 above.

To make my request more explicit: If I’m talking about how to mitigate x-risk, and somebody changes the subject to immediate AI problems that don’t relate to x-risk, then I have no problem saying “OK sure, but afterwards let’s get back to the human extinction thing we were discussing before….” Whereas I would not say “Those problems you’re talking about are much less important than the problems I’m talking about.” Cause Prioritization is great for what it is, but it’s not a conversation norm. If someone is talking about something they care about, it’s fine if that thing isn’t related to alleviating the maximum amount of suffering; that fact doesn’t give you the right to change the subject. Notice that even the most ardent AI x-risk advocates seem quite happy to devote substantial time to non-cosmologically-impactful issues that they care about—NIMBY zoning laws are a typical example. And that’s fine!

Anyway, if we do a good job of making a case that literal human extinction from future AI is a real possibility on the table, then we win the argument—the Cause Prioritization will take care of itself. So that’s where we need to be focusing our communication and debate. Keep saying: “Let’s go back to the future-AI-causing-human-extinction thing. Here’s why it’s a real possibility.” Keep bringing the discussion back to that. Head-to-head comparisons of AI x-risk versus other causes tend to push discussions away from this all-important crux. Such comparisons would be a (ahem) distraction!

4.2 Shout it from the rooftops: There are people of all political stripes who think AI x-risk mitigation is important (and there are people of all political stripes who think it’s stupid)

Some people have a strong opinion about “silicon valley tech people”—maybe they love them, or maybe they hate them. Does that relate to AI x-risk discourse? Not really! Because it turns out that “silicon valley tech people” includes many of the most enthusiastic believers in AI x-risk (e.g. see the New York Times profile of Anthropic, a leading AI company in San Francisco) and it also includes many of its most enthusiastic doubters (e.g. tech billionaire Marc Andreessen: “The era of Artificial Intelligence is here, and boy are people freaking out. Fortunately, I am here to bring the good news: AI will not destroy the world…”).

Likewise, some people have a strong opinion (one way or the other) about “the people extremely concerned about current AI problems”. Well, it turns out that this group likewise includes both enthusiastic believers in future AI x-risk (e.g. Tristan Harris) and enthusiastic doubters (e.g. Timnit Gebru).

By the same token, you can find people taking AI x-risk seriously in Jacobin magazine on the American left, or on Glenn Beck on the American right; in fact, a recent survey of the US public got supportive responses from Democrats, Republicans, and Independents—all to a quite similar extent—to questions about AI extinction risk being a global priority.[3]

I think this situation is good and healthy, and I hope it lasts, and we should try to make it widely known. I think that would help fight the “X distracts from Y” objection to AI x-risk, in a way that complements the kinds of direct, object-level counterarguments that I was giving in Section 2 above.

(Also posted on LessWrong)

  1. ^

    There are fine differences between “extinction risk” and “x-risk”, but it doesn’t matter for this post.

  2. ^

    Sometimes I try to get people excited about the idea that they could have a very big positive impact on the world via incorporating a bit of Cause Prioritization into their thinking. (Try this great career guide!) Sometimes I even feel a bit sad or frustrated that such a tiny sliver of the population has any interest whatsoever in thinking that way. But none of that is the same as casting judgment on those who don’t—it’s supererogatory, in my book. For example, practically none of my in-person friends have heard of Cause Prioritization or related ideas, but they’re still great people who I think highly of.

  3. ^

    Party breakdown results were not included in the results post, but I asked Jamie Elsey of Rethink Priorities and he kindly shared those results. It turns out that the support/oppose and agree/disagree breakdowns were universally the same across the three groups (Democrats, Independents, Republicans) to within at most 6 percentage points. If you look at the overall plots, I think you’ll agree that this counts as “quite similar”.