Doing EA Better: Preamble, Summary, and Introduction

Preamble to the Preamble

Last week we published a post called “Doing EA Better”, which argued that EA’s new-found power and influence obligates us to solve our movement’s significant problems with respect to epistemics, rigour, expertise, governance, and power.

We have since received (and responded to) emails from very high-profile members of the movement, and we commend their willingness to discuss concrete actions.

As mentioned in that post’s preamble, we are splitting DEAB up into a sequence to facilitate object-level discussion. This is the first of these, covering the introductory sections.

Each post will include the relevant parts of the list of suggested reforms. There isn’t a perfect correspondence between the subheadings of the post and the reforms list, so not all reforms listed will be 100% relevant to the section in question.

In addition to the sequence, we will publish a post about EA’s indirect but non-negligible links to reactionary thought, as well as a list of recommended readings.

Finally, we have tried (imperfectly) to be reasonably precise in our wording, and we ask that before criticising an argument of ours, commenters ensure that it is an argument that we are in fact making.

Preamble

It’s been a rough few months, hasn’t it?

Recent events, including the FTX collapse and the Bostrom email/​apology scandal, have led a sizeable portion of EAs to become disillusioned with or at least much more critical of the Effective Altruism movement.

While the current crises have made some of our movement’s problems more visible and acute, many EAs have become increasingly worried about the direction of EA over the last few years. We are some of them.

This document was written collaboratively, with contributions from ~10 EAs in total. Each of us arrived at most of the critiques below independently before realising through conversation that we were not “the only one”. In fact, many EAs thought similarly to us, or at least were very easily convinced once thoughts were (privately) shared.

Some of us started to become concerned as early as 2017, but the discussions that triggered the creation of this post happened in the summer of 2022. Most of this post was written by the time of the FTX crash, and the final draft was completed the very day that the Bostrom email scandal broke.[1] Thus, a separate post will be made about the Bostrom/FLI issues in around a week.

A lot of what we say is relevant to the FTX situation, and some of it isn’t, at least directly. In any case, it seems clear to us that the FTX crisis significantly strengthened our arguments.

Some time ago we reached the point where we would feel collectively irresponsible if we did not voice our concerns, and now seems like the time when those concerns are most likely to be taken seriously. We voice them in the hope that we can change our movement for the better, and have taken pains to avoid coming off as “hostile” in any way.

Experience indicates that many EAs are likely to agree with significant proportions of what we say, but have not said so publicly due to the significant risk doing so would pose to their careers, their access to EA spaces, and their likelihood of ever being funded again.

Naturally the above considerations also apply to us: we are anonymous for a reason.

This post is also very long, so each section has a summary at the top for ease of scanning, and we’ll break this post up into a sequence to facilitate object-level discussion.

Finally, we ask that people upvote or downvote this post on the basis of whether they believe it to have made a useful contribution to the conversation, rather than whether they agree with all of our critiques.

Summary

  • The Effective Altruism movement has rapidly grown in size and power, and we have a responsibility to ensure that it lives up to its goals

  • EA is too homogenous, hierarchical, and intellectually insular, with a hard core of “orthodox” thought and powerful barriers to “deep” critiques

  • Many beliefs accepted in EA are surprisingly poorly supported, and we ignore entire disciplines with extremely relevant and valuable insights

  • Some EA beliefs and practices align suspiciously well with the interests of our donors, and some of our practices render us susceptible to conflicts of interest

  • EA decision-making is highly centralised, opaque, and unaccountable, but there are several evidence-based methods for improving the situation

Introduction

As committed Effective Altruists, we have found meaning and value in the frameworks and pragmatism of the Effective Altruism movement. We believe it is one of the most effective broadly-focused social movements, with the potential for world-historical impact.

Already, the impact of many EA projects has been considerable and inspiring. We appreciate the openness to criticism found in various parts of the EA community, and believe that EA has the potential to avoid the pitfalls faced by many other movements by updating effectively in response to new information.

We have become increasingly concerned with significant aspects of the movement over our collective decades here, and while the FTX crisis was a shock to all of us, we had for some time been unable to escape the feeling that something was going to go horribly wrong.

To ensure that EA has a robustly positive impact, we feel the need to identify the aspects of our movement that we find concerning, and suggest directions for reform that we believe have been neglected. These fall into three major categories:

  1. Epistemics

  2. Expertise & Rigour

  3. Governance & Power

We do not believe that the critiques apply to everyone and to all parts of EA, but to certain – often influential – subparts of the movement. Most of us work on existential risk, so the majority of our examples will come from there.[2]

Not all of the ~10 people who helped to write this post agree with all the points made within, both in terms of “goes too far” and “doesn’t go far enough”. It is entirely possible to strongly reject one or more of our critiques while accepting others.

In the same vein, we request that commenters focus on the high-level critiques we make, rather than diving into hyper-specific debates about one thing or another that we cited as an example.

Finally, this report started as a dozen or so bullet points, and currently stands at over 20,000 words. We wrote it out of love for the community, and we were not paid for any of its writing or research despite most of us either holding precarious grant-dependent gig jobs or living on savings while applying for funding. We had to stop somewhere. This means that many of the critiques we make could be explored in far, far more detail than their rendition here contains.

If you think a point is underdeveloped, we probably agree; we would love to see others take the points we make and explore them in greater depth, and indeed to do so ourselves, if we can manage that while still being able to pay rent.

We believe that the points we make are vital for the epistemic health of the movement, that they will make it more accessible and effective, and that they will enhance the ability of EA as a whole to do the most good.

Two Notes:

  1. Some of the issues we describe are based on personal experience and thus cannot be backed by citations. If you doubt something we assert, let us know and we’ll give as much detail as we can without compromising our anonymity or that of others. You can also just ask around: we witnessed most of the things we mention on multiple independent occasions, so they’re probably not rare.

  2. This post ties a lot of issues together and is thus necessarily broad, so we will have to make some generalisations, to which there will be exceptions.

Suggested reforms

Below is a preliminary, non-exhaustive list of relevant suggestions for structural and cultural reform that we think may be good ideas and should certainly be discussed further.

It is of course plausible that some of them would not work; if you think so for a particular reform, please explain why! We would like input from a range of people, and we certainly do not claim to have all the answers!

In fact, we believe it important to open up a conversation about plausible reforms not because we have all the answers, but precisely because we don’t.

Italics indicate reforms strongly inspired by or outright stolen from Zoe Cremer’s list of structural reform ideas. Some are edited or merely related to her ideas; they should not be taken to represent Zoe’s views.

Asterisks (*) indicate that we are less sure about a suggestion, but sure enough that we think it is worth considering seriously, e.g. through deliberation or research. Otherwise, we have been developing or advocating for most of these reforms for a long time and have a reasonable degree of confidence that they should be implemented in some form or another.

Timelines are suggested to ensure that reforms can become concrete. If stated, they are rough estimates, and if there are structural barriers to a particular reform being implemented within the timespan we suggest, let us know!

Categorisations are somewhat arbitrary; we just needed to break up the text for ease of reading.

Critique

General

  • EAs must be more willing to make deep critiques, both in private and in public

    • You are not alone, you are not crazy!

    • There is a much greater diversity of opinion in this community than you might think

    • Don’t assume that the people in charge must be smarter than you, and that you must be missing something if you disagree – even most of them don’t think that!

  • EA must be open to deep critiques as well as shallow critiques

    • We must temper our knee-jerk reactions against deep critiques, and be curious about our emotional reactions to arguments – “Why does this person disagree with me? Why am I so instinctively dismissive about what they have to say?”

    • We must be willing to accept the possibility that “big” things may need to be fixed and that some of our closely-held beliefs are misguided

    • Our willingness to consider a critique should not depend on the seniority of the authors of the work being critiqued

    • When we reject critiques, we should present our reasons for doing so

  • EAs should read more deep critiques of EA, especially external ones

    • For instance, this blog and this forthcoming book

  • EA should cut down its overall level of tone/​language policing

    • Norms should still be strongly in favour of civility and good-faith discourse, but anger or frustration cannot be grounds for dismissal, and deep critique must not be misinterpreted as aggression or “signalling”

    • Civility must not be confused with EA ingroup signalling

    • Norms must be enforced consistently, applying to senior EAs just as much as newcomers

  • EAs should make a conscious effort to avoid (subconsciously/​inadvertently) using rhetoric about how “EA loves criticism” as a shield against criticism

    • Red-teaming contests, for instance, are very valuable, but we should avoid using them to claim that “something is being done” about criticism and that we therefore have nothing to worry about

    • “If we are so open to critique, shouldn’t we be open to this one?”

    • EAs should avoid delaying reforms by professing to take critiques very seriously without actually acting on them

  • EAs should state their reasons when dismissing critiques, and should be willing to call out other EAs if they use the rhetoric of rigour and even-handedness without its content

  • EAs, especially those in community-building roles, should send credible/​costly signals that EAs can make or agree with deep critiques without being excluded from or disadvantaged within the community

  • EAs should be cautious of knee-jerk dismissals of attempts to challenge concentrations of power, and seriously engage with critiques of capitalist modernity

  • EAs, especially prominent EAs, should be willing to cooperate with people writing critiques of their ideas and participate in adversarial collaborations

  • EA institutions and community groups should run discussion groups and/​or event programmes on how to do EA better

Institutions

  • Employees of EA organisations should not be pressured by their superiors to refrain from publishing critical work

  • Funding bodies should enthusiastically fund deep critiques and other heterodox/​“heretical” work

  • EA institutions should commission or be willing to fund large numbers of zero-trust investigations by domain experts, especially into the components of EA orthodoxy

  • EA should set up a counter foundation that has as its main goal critical reporting, investigative journalism, and “counter research” about EA and other philanthropic institutions [within 12 months]*

    • This body should be run by independent people and funded by its own donations, with a “floor” proportional to other EA funding decisions (e.g. at least one researcher/community manager/grant programme, and admin fees of a certain amount)

    • If this foundation is established, EA institutions should cooperate with it

  • EA institutions should recruit known critics of EA and offer them e.g. a year of funding to write up long-form deep critiques

  • EA should establish public conference(s) or assemblies for discussing reforms [within 6 months], with open invitations for EAs to attend without a selection process. For example, an “online forum of concerns”:

    • Every year invite all EAs to raise any worries they have about EA central organisations

    • These organisations declare beforehand that they will address the top concerns and worries, as voted by the attendees

    • Establish a voting mechanism, e.g. upvotes on the worries that seem most pressing

Red Teams

  • EA institutions should establish clear mechanisms for feeding the results of red-teaming into decision-making processes [within 6 months]

  • Red teams should be paid and composed of people with a variety of views, and former or non-EAs should be actively recruited for red-teaming

    • Interesting critiques often come from dissidents/exiles who left EA in disappointment or were pushed out due to their heterodox/“heretical” views (yes, this category includes a couple of us)

  • The judging panels of criticism contests should include people with a wide variety of views, including heterodox/“heretical” views

  • EA should use criticism contests as one tool among many, particularly well-suited to eliciting highly specific shallow critiques

Other

  • EAs should see EA as a set of intentions and questions (“What does it mean to ‘do the most good’, and how can I do it?”) rather than a set of answers (“AI is the highest-impact cause area, then maybe biorisk.”)

  • More people working within EA should be employees, with the associated legal rights and stability of work, rather than e.g. grant-dependent “independent researchers”

  • EA funders should explore the possibility of funding more stable, safe, and permanent positions, such as professorships

Contact Us

If you have any questions or suggestions about this article, EA, or anything else, feel free to email us at concernedEAs@proton.me.