What I learned from the criticism contest

I was a judge on the Criticism and Red-teaming Contest, and read 170 entries. It was overall great: hundreds of submissions and dozens of new points.

Recurring patterns in the critiques

But most people make the same points. Some of them have been made since the beginning, around 2011. You could take that as an indictment of EA’s responsiveness to critics, proof that there’s a problem, or merely as proof that critics don’t read and that there’s a small number of wide basins in criticism space. (We’re launching the EA Bug Tracker to try to distinguish these scenarios, and to keep valid criticisms in sight.[1])

Trends in submissions I saw:

(I took out the examples because naming them felt mean. I can back them up in DMs.)

  • Academics are stuck in 2015. It’s great that academics are writing full-blown papers about EA, and on average I expect this to help us fight groupthink and to bring new ideas in. But almost all of the papers submitted here address a seriously outdated version of EA: before the longtermist shift, before the shift away from public calculation, before the systemic stuff.
    Some of them even just criticise Singer 2009 and assume this is equivalent to criticising EA.
    (I want to single out Sundaram et al as an exception. It is steeped in current EA while maintaining some very different worldviews.)

  • Normalisation. For various reasons, many suggestions would make EA less distinctive. Sometimes that’s intentional PR skulduggery, sometimes it’s retconning a more mainstream cause into the tent, sometimes it’s adding epicycles to show that mainstream problem x is really the biggest factor in AI risk, and sometimes it’s just what happens when you average intuitions (the mode of a group will reflect current commonsense consensus about causes and interventions, and so not be very EA). Each of these probably has some merit. But if we implemented all of them, we’d be destroyed.

  • Schism. People were weirdly enthusiastic about schisming into two movements, one neartermist and one longtermist. (They usually phrase this as a way of letting neartermist causes get their due, but I see it as a sure way to doom the neartermist side to normalisation instead.)

  • Stop decoupling everything. The opposite mistake is to give up on decoupling, letting the truism that ‘all causes are connected’ swamp focussed efforts.

  • Names. People devote a huge amount of time to the connotations of different names. But obsessing over this stuff is an established EA foible.

  • Vast amounts of ressentiment. Some critiques are just disagreements about cause prioritisation, phrased hotly as if this gave them more weight.

  • EAs underestimate uncertainty in cause prioritisation. One perennial criticism which has always been true is that most of cause prioritisation, the heart of EA, is incredibly nonobvious and dependent on fiddly philosophical questions.[2] And yet we don’t much act as if we knew this, aside from a few GPI economist-philosophers. This is probably the fairest criticism I hear from non-EAs.

Fundamental criticism takes time

Karnofsky, describing his former view: “Most EA criticism is—and should be—about the community as it exists today, rather than about the “core ideas.” The core ideas are just solid. Do the most good possible—should we really be arguing about that?” He changed his mind!

Really fundamental challenges to your views don’t move you at the time you read them. Instead they set dominoes falling; they alter some weights a little, so that the next time the problem comes up in your real life, you notice it and hold it in your attention for a fraction of a second longer. And then, over about 3 years, you become a different person, and no trace of the original post remains, and no gratitude accrues.

If the winners of the contest don’t strike you as fundamental critiques, this is part of why. (The weakness of the judges is another part, but a smaller part than this, I claim. Just wait!)

My favourite example of this is 80k arguing with some Marxists in 2012. We ended up closer than you’d have believed!

My picks

Top for changing my mind

  • Aesthetics as Epistemic Humility.
    I usually view “EA doesn’t have good aesthetics” as an incredibly shallow critique—valuable for people doing outreach but basically not relevant in itself. Why should helping people look good? And have you seen how much most aesthetics cost?

    But this post’s conception is not shallow. It treats aesthetics as an incredibly important kind of value to some people—and conceivably a unifying frame for more conventionally morally significant values. I still don’t want to allocate much money to this, but I won’t call it frivolous again.

  • EvN on veganism
    van Nostrand’s post is fairly important in itself—she is a talented health researcher, and for your own sake you should heed her. (It will be amazing if she does the blood tests.) But I project much greater importance onto it. Context: I was vegan for 10 years.

    The movement has been overemphasising diet for a long time. This focus on personal consumption is anti-impact in a few ways: the cognitive and health risks we don’t warn people about, the smuggled deontology messing up our decisions, and the bias towards personal action making us satisfice at mere net zero.

    There is of course a tradeoff with a large benefit to animals and a “taking action / sincerity / caring / sacrificing” signal, but we could maintain our veganism while being honest about the costs it has for some people. (Way more contentious is the idea of meat options at events as welcoming and counter-dogmatic. Did we really lose great EAs because we were hectoring them about meat when it wasn’t the main topic? No idea, but unlike others she doesn’t harp on about it; she just does the science.) As you can see from the email she quotes, this post persuaded me that we got the messaging wrong and plausibly did some harm. (Net harm? Probably not, but c’mon.)

Top 5 for improving EA

  • Bad Omens
    This post was robbed (it came in just under the prize threshold). But all of you have already read it. I beg you to go look again and take it to heart. Cause-agnostic community building alienates the people we need most, in some areas. Community builders should specialise. We probably shouldn’t do outreach using untrained people with a prewritten bottom line.

  • Are you really in a race?
    The apparent information cascade in AI risk circles has been bothering me a lot. Then there are the dodgy effects of thoughtless broadcasting, including the “pivotal acts” discourse. This was a nice, subtle intervention to make people think a bit about the most important current question in the world.

  • Obviously Froolow and Hazelfire and Lin

  • Effective altruism in the garden of ends
    Alterman’s post is way too long, and his “fractal” idea is incredibly underspecified. Nonetheless he describes how I live, and how I think most of us who aren’t saints should live.

  • Red teaming a model for estimating the value of longtermist interventions

  • The very best criticism wasn’t submitted because it would be unseemly for the author to win.

Top for prose

Top for rigour

Top posts I don’t quite understand in a way which I suspect means they’re fundamental

Top posts I disagree with

  • Vinding on Ord. I disagree with it directionally, but Ord’s post is surprisingly weak. Crucial topic, too.

  • Zvi. Really impressed with his list of assumptions (only two errors).

  • You can’t do longtermism because the complexity class is too hard. Some extremely bad arguments (e.g. Deutsch on AI) take the same form as this post—appealing to a worst-case complexity class, which often says very little about the practical runtime of an algorithm (a toy sketch of this gap follows this list). But I am not confident of this.

  • Private submission with a bizarre view of gain of function research.

  • Sundaram et al
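On the complexity-class point, here is the toy sketch promised above. It is my own illustration, not anything from the submission: naive quicksort carries a Θ(n²) worst-case label, yet on typical shuffled inputs it behaves like n log n, which is the sort of gap between worst-case class and practical runtime I have in mind.

```python
# Toy illustration (mine, not the post's): a worst-case complexity label can be
# a poor guide to typical runtimes. Naive quicksort is Theta(n^2) in the worst
# case but roughly n log n on typical (shuffled) inputs.
import random
import sys
import time

sys.setrecursionlimit(10_000)  # the worst case recurses ~n deep

def quicksort(xs):
    """Quicksort with the first element as pivot (deliberately naive)."""
    if len(xs) <= 1:
        return xs
    pivot, rest = xs[0], xs[1:]
    left = [x for x in rest if x < pivot]
    right = [x for x in rest if x >= pivot]
    return quicksort(left) + [pivot] + quicksort(right)

n = 3_000
cases = {
    "sorted input (worst case)": list(range(n)),       # adversarial: hits n^2
    "shuffled input (typical)": random.sample(range(n), n),
}
for name, data in cases.items():
    t0 = time.perf_counter()
    quicksort(data)
    print(f"{name}: {time.perf_counter() - t0:.3f}s")
```

The same pattern (scary worst case, benign typical case) is why SAT solvers and the simplex method work far better in practice than their complexity classes suggest.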

Process

One minor side-effect of the contest: we accidentally made people frame their mere disagreements or iterative improvements as capital-C Criticisms, more oppositional than they perhaps are. You can do this with anything—the line between critique and next iteration is largely a matter of tone, an expectation of being listened to, and whether you’re playing to a third-party audience.

  1. ^

    Here’s a teaser I made in an unrelated repo.

  2. ^

    AI (i.e. AI in general, not AI alignment specifically) only rises above this because, at this point, there’s no way it isn’t going to have some major impact, even if that impact isn’t existential.