Critiques of EA that I want to read

Note — I’m writing this in a personal capacity, and am not representing the views of my employer.

I’m interested in the EA red-teaming contest as an idea, and there are lots of interesting critiques I’d want to read, but I haven’t seen any of them written yet. So I put together a big list of critiques of EA I’d be really interested in seeing come out of the contest. I’d personally be interested in writing some of these, but don’t really have time to right now, so I’m hoping that by sharing them, someone else will write a good version. I’d also welcome people sharing, in the comments here, other critiques they’d be excited to see written!

If someone wrote all of these, there are many where I wouldn’t necessarily agree with the conclusions, but I’d be really interested in the community having a discussion about each of them, and I haven’t seen those discussions happen before.

If you want to write any of these, I’m happy to give brief feedback, or to share a bunch of bullet points with my thoughts on the topic.

Critiques of EA

  • Defending person-affecting views

    • Person-affecting views try to capture an intuition that is something like, “something can only be bad if it is bad for some particular person or group of people.”

      • This interacts with a common argument for reducing existential risk: that doing so is good because future people will then get to be born and have good lives. If you take a person-affecting view seriously, you might think it is no big deal that those people won’t be born, since they aren’t any particular people, and so not being born isn’t bad for them.

      • Likewise, some person-affecting views are neutral toward adding additional happy people, while a non-person-affecting total utilitarian view favors adding additional happy people, all else being equal.

      • People in the EA community have critiqued person-affecting views for a variety of reasons related to the non-identity problem.

    • Person-affecting views are interesting, but pretty much universally dismissed in the EA community. However, I think a lot of people find the intuitions behind person-affecting views really powerful, and if there were a convincing version of a person-affecting view, it probably would change a fair amount of longtermist prioritization.

    • I’d really love to see a strong defense of person-affecting views, or a formulation of a person-affecting view that tries to address critiques made of them. This seems like a fairly valuable contribution to making EA more robust. I think I’d currently bet on the community not realigning in light of this defense, but it seems worth trying to make because the intuitions powering person-affecting views are compelling.

  • It’s getting harder socially to be a non-longtermist in EA.

    • People seem to push back on the idea that EA is getting more longtermist by pointing at the increase in grantmaking in the global health and development space.

    • Despite this, a lot of people seem to have a sense that EA is pushing toward longtermism.

    • I think the argument that EA is becoming more longtermist is stronger than doubters give it credit for, but the shift might mostly have to do with social dynamics in the space.

      • The intellectual leadership and community-building efforts seem to be focused on longtermism.

      • The “energy” of the space seems mostly focused on longtermist projects.

      • People join EA interested in neartermist causes, and gradually become more interested in longtermist causes (on average).

    • I think these factors might be making it socially harder to be a non-longtermist who engages with the EA community, and that this is an important and missing part of the ongoing discussion about how EA community norms are changing.

  • EA is neglecting efforts to influence non-EA organizations, and this is becoming more detrimental to impact over time.

    • For this argument, I’m assuming that EA is generally not missing huge opportunities for impact.

    • As time goes on, many grants and decisions in the EA space ought, in theory, to become more effective and closer to the peak level of impact possible.

    • If this is the case, changing EAs’ minds is becoming less effective, because the possible returns on changing their views are lower.

    • Despite this, relatively little effort seems to be put into changing the minds of non-EA funders and pushing them toward EA donation opportunities, while a lot more effort is put into shaping the prioritization work of a small number of EA thinkers.

    • If so, this seems like a non-ideal strategy for the EA research community, since the possible returns on changing the prioritization of EA thinkers are fairly small.

  • The fact that everyone in EA finds the work we do interesting and/or fun should be treated with more suspicion.

    • It’s exciting and interesting to work on lots of EA topics. This should make us mildly suspicious of whether they are really as important as we think.

    • I’ve worked professionally in EA and EA-adjacent organizations since around 2016, and the entire time, I’ve found my work really really interesting.

    • I know a lot of other people who find their work really really interesting.

    • I’m pretty confident that what I find interesting is not that likely to overlap with what’s most important.

    • It seems pretty plausible from this that I’m introducing fairly large biases into what I do because of what I find interesting, and missing a lot of opportunities for impact.

    • It seems plausible that this is systematically happening across the EA space.

  • Sometimes change takes a long time — EA is poorly equipped to handle that

    • I’ve been involved in the EA space and adjacent communities for around 8 years, and throughout that time the space seems to have changed dramatically.

    • But some really important projects probably take a long time to demonstrate progress or results.

    • If the community is going to continue changing and evolving rapidly, it seems like we are not equipped to do these projects.

    • There are some ways to address this (e.g. giving endowments to charities so they can operate independently for longer), but these seem underexplored in the EA space.

  • Alternative models for distributing funding are probably better and are definitely under-explored in EA

    • Lots of people in EA buy into the idea that groups of people make better decisions than individuals, but all our funding mechanisms are built around a small number of individuals making decisions.

      • The FTX regranting program is a counterexample to this (and a good one), but is still fundamentally not that transformative, and only slightly increases the number of people making decisions about funding.

    • There are lots of alternative funding models that could be explored more, and should be!

      • Distribute funds across EA Funds by the number of people donating to each cause, or by people’s aggregate weighting of causes, instead of by total donations (thus getting a fund distribution that represents the priorities of the community rather than its wealth; a rough sketch of this follows at the end of this list).

      • Play with projects like Donation Democracy on a larger scale.

      • Trial consensus mechanisms for distributing funding with large groups of donors (likely selecting against very unusual but good grants, while improving average grant quality).

      • Pursue more active grantmaking of ideas that seem promising (not really using group decision making but still a different funding approach).
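
    • As a rough illustration of the first model above, here is a minimal sketch of splitting the same pool of donations by donors’ average cause weightings rather than by donation totals, so that each person counts equally rather than each dollar. All donor names, amounts, and weightings below are made up:

```python
# Hypothetical sketch: allocate a pooled fund by donors' aggregate cause
# weightings (one person, one "vote") instead of by how much each donor gives.
# Each donor's weights sum to 1; all names and numbers are invented.
donations = {
    "donor_a": (100_000, {"global_health": 0.2, "animal_welfare": 0.1, "longtermism": 0.7}),
    "donor_b": (1_000, {"global_health": 0.6, "animal_welfare": 0.4, "longtermism": 0.0}),
    "donor_c": (500, {"global_health": 0.5, "animal_welfare": 0.3, "longtermism": 0.2}),
}

total_pool = sum(amount for amount, _ in donations.values())

# Status quo: money follows money, so the largest donor's weights dominate.
by_wealth = {}
for amount, weights in donations.values():
    for cause, w in weights.items():
        by_wealth[cause] = by_wealth.get(cause, 0.0) + amount * w

# Alternative: average the weights across donors, then split the same pool
# according to that average, so every donor counts equally.
n_donors = len(donations)
avg_weights = {}
for _, weights in donations.values():
    for cause, w in weights.items():
        avg_weights[cause] = avg_weights.get(cause, 0.0) + w / n_donors

by_people = {cause: total_pool * w for cause, w in avg_weights.items()}

print("Split by donation size:    ", {c: round(v) for c, v in by_wealth.items()})
print("Split by average weighting:", {c: round(v) for c, v in by_people.items()})
```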

  • EA funder interactions with charities often make operating an EA charity harder than it has to be

    • I’ve worked at several EA and non-EA charities, and overall, the approach to funding in the EA space is vastly better than in the non-EA world. But it still isn’t ideal, and lots of problems come up.

      • Sometimes funders try to play 5d chess with each other to avoid funging each other’s donations, and this results in the charity not getting enough funding.

      • Sometimes funders don’t provide much clarity on the amount of time they intend to fund organizations for, which makes it harder to operate the organization long-term or plan for the future.

      • Lots of EA funding mechanisms seem to be based largely on building relationships with funders, which makes it much harder to start a new organization in the space if you’re an outsider.

        • Relatedly, it’s harder to build these relationships without knowing a lot of EA-specific vocabulary, which seems bad for bringing in new people.

    • These problems seem addressable if funders thought less about how other funders are acting, and worked on longer time horizons with grants to organizations.

  • RFMF doesn’t make much sense in many of the contexts it’s used in

    • Room For More Funding makes a lot of sense for GiveWell-style charity evaluation. It’s answering a straightforward question like, “how much more money could this charity absorb and continue operating at X level of cost-effectiveness with this very concrete intervention.”

    • But this is not how charities that don’t do concrete interventions operate, and for historical reasons (like GiveWell using this term), people often ask these charities about their RFMF.

    • Charities estimating their own “RFMF” probably mean a variety of different things (a toy illustration of how much these can diverge follows this list), including:

      • How much money they need to keep operating for another year at their current level

      • How much money they could imagine spending over the next year

      • How much money they could imagine spending over XX years

      • How much money a reasonable strategic plan would cost over some time period
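
    • As a toy illustration of how much these interpretations can diverge, here is a sketch for a hypothetical organization; every figure below is made up. Depending on which definition it has in mind, the same organization could honestly report any of these numbers as its “RFMF”:

```python
# Hypothetical organization; every figure here is invented for illustration.
annual_budget = 500_000                   # keep operating another year at the current level
imaginable_spend_next_year = 750_000      # money it could imagine spending over the next year
imaginable_spend_three_years = 2_200_000  # money it could imagine spending over 3 years
strategic_plan_cost = 1_800_000           # cost of a reasonable 3-year strategic plan

rfmf_by_definition = {
    "keep operating one more year": annual_budget,
    "imaginable spending, next year": imaginable_spend_next_year,
    "imaginable spending, next 3 years": imaginable_spend_three_years,
    "reasonable strategic plan, 3 years": strategic_plan_cost,
}

# The reported "RFMF" ranges from $500k to $2.2M depending on the definition.
for definition, amount in rfmf_by_definition.items():
    print(f"{definition}: ${amount:,}")
```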

    • We need more precise language for talking about the funding needs of research or community-building organizations, so that people can interpret these figures accurately.

  • Suffering-focused longtermism stuff seems weirdly sidelined

    • S-risks (risks of astronomical suffering) seem like at least a medium-big deal, but they seem to be treated as basically unimportant.

    • Lots of people in the EA space seem to believe that a large portion of future minds will be digital (e.g. here is a leader in the EA space saying they think there is an ~80% chance of this).

    • If this happens, it seems totally reasonable to give some credence to worlds where lots of digital minds suffer a lot.

      • This possibility seems to be taken seriously by only a few organizations (e.g. the Center on Long-term Risk), but remains basically a fringe position, and doesn’t seem represented at major grantmakers and community-building organizations.

    • I think this is probably because these views are somewhat associated with really strong negative utilitarianism, but s-risks also seem very concerning for total utilitarians.

    • It seems bad to have sidelined these perspectives, and s-risks probably should be explored more, or at least discussed more openly (especially from a non-negative utilitarian perspective).

  • Logical consistency seems underexplored in the EA space

    • A core, usually unstated premise of the EA approach is that ethics should be consistent.

    • I don’t think I have particularly good reasons for thinking ethics should be consistent, especially if I adopt the kind of moral realism that seems somewhat popular in the EA space.

    • There seem to be plausible explanations for why I think ethics should be consistent that don’t have much to do with morality (e.g. maybe logic is an artifact of the way human languages are structured).

    • I’d be interested in someone writing a critique of the idea that ethics has to be consistent, since this idea seems to underpin a lot of EA thinking.

      • There is a lot of philosophical critique of moral particularism, but I think that EA cases are interesting both because of the high degree of interest in moral realism, and because EA explicitly acts on what might be only thought experiments in other contexts (like very low-likelihood, high-EV interventions).

      • If the result of a critique is that consistency is actually fairly important, that seems like it would have ramifications for the community (consistency seems to be assumed to be important, but a lot of decisions don’t seem particularly consistent).

  • People are pretty justified in their fears of critiquing EA leadership/community norms

    • It’s probably bad for your career to publicly critique EA leadership, or to write certain critiques in response to this contest, but the framing of these invitations for critique seems to pretend that it isn’t.

    • I have a handful of critiques of EA that I am fairly certain would negatively impact my career if I voiced them, even though I believe they are good-faith criticisms, and I think engaging with them would strengthen EA.

      • I removed a handful of items from this list because I basically thought they’d be received too negatively, and that could harm me or my employer.

        • I think everything I removed was fairly good faith and important as a critique.

        • This is a list of critiques I want to see, not even critiques I necessarily believe, and I still felt like I had to remove things.

        • I think it is bad that I felt the need to remove anything, but I also think it was the right decision.

    • I’ve heard anecdotes about people posting what seem like reasonable critiques and being asked to take them down by leadership in the EA space (for reasons like “they are making a big deal out of something not that important”).

    • I think that this is a bad dynamic, and has a lot to do with the degree of power imbalances in the EA space, and how socially and intellectually tight-knit EA leadership is.

    • This seems like a dynamic that should be discussed openly and critiqued.

  • Grantmakers brain-drain organizations — is this good?

    • My impression of the EA job market is that people consider jobs at grantmakers to be the highest status.

    • The best-paying jobs in EA (maybe besides some technical roles) are at grantmakers.

    • This probably causes some degree of “brain-drain,” where grantmakers are able to hire the most talented researchers.

    • This seems like it could have some negative effects on other organizations, in ways that are bad for the community.

      • Grantmakers are narrowly focused on short-term decisions (“issue this grant or not?”), rather than doing longer-term or exploratory research.

        • This means the most skilled researchers are pulled away from exploratory questions toward short-term ones, even if those researchers would themselves prioritize the exploratory research over the short-term questions.

      • Grantmakers tend to be relatively secretive or quiet about their decision-making and thinking, so the research of the best researchers in the community often isn’t shared widely (and thus can’t be widely adopted).

    • If this dynamic is net-negative (it has good effects too, like grantmakers making better grants!), then addressing it seems pretty important.

  • Effective animal advocacy mostly neglects the most important animals

    • The effective animal advocacy movement has focused mostly on welfare reforms for laying hens and broiler chickens for the last several years.

    • I think this is probably partly for historical reasons: the animal advocacy movement was already somewhat focused on welfare reforms, and of the interventions being pursued at the time, these seemed the most promising.

    • I think that it is possible this approach has entirely missed a ton of impact for other farmed animals (especially fish and insects) and wild animals, and that prioritizing these other animals from the beginning could have been a much more effective use of that funding, even if new organizations needed to be formed, etc.

      • In particular, I think that not working on insect farming over the last decade may come to be seen as one of the EAA community’s largest regrets in the near future.

    • This dynamic probably will continue to play out to some extent, and it seems like it could be important to address it sooner rather than later.

      • Large organizations in the space seem focused only on specific strategies and specific animals, and their priorities over the next few years are still laying hens and broilers.