Criticism of EA Criticisms: Is the real disagreement about cause prio?

TLDR: EA strategy depends on cause prioritization. Critiques of EA, EA leaders, and EA strategy often fail to pass the following test: “Would this criticism apply if I shared the cause prio of the person/organization that I’m critiquing?” If the answer is “no”, the criticism is just a symptom of an underlying disagreement about cause prioritization.

There have been a lot of criticisms of EA lately. Some people have put a lot of effort into critiquing EA, EA leaders, EA culture, and EA strategy.

I believe many of these critiques will not be impactful (read: they will not lead the key leaders/​stakeholders to change their minds or actions).

But what’s interesting is that many of these critiques will not be impactful for the same reason: differences in cause prioritization.

Example:

Alice: I think EA should be much more careful about movement growth. It seems like movement-builders are taking a “spray and pray” approach, spending money in ways that often appear to be wasteful, and attracting vultures.

So now I’m going to spend 50 hours writing a post that takes 30 minutes to read in order to develop these intuitions into more polished critiques, find examples, and justify my claims. Also, I’m going to send it to 10 other EAs I know to get their feedback.

Bob: Wait a second, before you do all of that, have you thought about why all of this is happening? Have you tried to perform an ideological Turing test of the people you’re trying to critique?

Alice: Of course!

Bob: Okay, what did you conclude?

Alice: I concluded [something related to the specific object-level disagreement]. You know, like, maybe the EA leaders are just so excited about movement growth that they haven’t seriously considered the downsides. Or maybe they’re too removed from “the field” to see some of the harmful effects of their policies. Or maybe, well, I don’t know, but even if I don’t know the exact reason, I still think the criticism is worth raising. If we only critiqued things that we fully understood, we’d barely ever critique anything.

Bob: Thanks, Alice. I agree with a lot of that. But there’s something simpler:

What if the real crux of this disagreement is just an underlying disagreement about cause prioritization and models of the world?

Alice: What do you mean?

Bob: Well, some EA leaders believe that unaligned artificial intelligence is going to end the world in the next 10-30 years. And some believe that we’re currently not on track to solve that problem, and that we don’t have particularly good plans for how to do so.

Do their behaviors make a bit more sense now?

Alice: Oh, that makes total sense. If I thought we only had 10-30 years to live, I’d agree with them way more. I still don’t think they’d be executing things perfectly, but many of the critiques I had in mind would no longer apply.

But also I just disagree with this whole “the world is ending in 10-30 years” thing, so I think my original critiques are still valid.

Bob: That makes sense, Alice. But it sounds like the real crux—the core thing you disagree with them about—is actually the underlying model that generates the strategies.

In other words, you totally have the right to disagree with their models on AI timelines, or P(doom) given current approaches. But if your critiques of their actions rely on your cause prioritization, you shouldn’t expect them to update anything unless they also change their cause prioritization.

Alice: Oh, I get it! So I think I’ll do three things differently:

First, I’ll acknowledge my cause prio at the beginning of my post.

Second, I’ll acknowledge which of my claims rely on my cause prio (or at the very least, rely on someone not having a particularly different cause prio).

And third, I might even consider writing a piece that explains why I’m unconvinced by the arguments around 10-30 year AI timelines, alignment being difficult, and/or the idea that we are not on track to build aligned AI.

Bob: Excellent! If you do any of this, I expect your post will be in the top 25% of critiques that I’ve recently seen on the EA Forum.

Also, if you post something, please don’t make it too long. If it takes longer than 10 minutes to read, consider a TLDR; if it takes longer than 30 minutes, consider a “Quick Summary” section at the beginning. Your criticism is likely to get more feedback + criticism if it takes less time for people to read!

Alice: That didn’t really follow from the rest of the post, but I appreciate the suggestion nonetheless!

Summary of suggestions

  • People critiquing EA should do more ideological Turing tests. In particular, they should recognize that a sizable fraction of EA leadership currently believes it is “somewhat likely” to “extremely likely” that AI will lead to the end of human civilization in the next 100 years (often <50 years).

  • People critiquing EA should explicitly acknowledge when certain critiques rely on certain cause prio assumptions.

  • People critiquing EA should try to write shorter posts and/or include short summaries.

  • Organizations promoting critiques should encourage/​reward these norms.

Note: My examples focus on AI safety, but I do not think my points rely on this particular cause prio.