“EA is very open to some kinds of critique and very not open to others” and “Why do critical EAs have to use pseudonyms?”

Preamble

This is an extract from a post called “Doing EA Better”, which argued that EA’s new-found power and influence obligate us to solve our movement’s significant problems with respect to epistemics, rigour, expertise, governance, and power.

We are splitting DEAB up into a sequence to facilitate object-level discussion.

Each post will include the relevant parts of the list of suggested reforms. There isn’t a perfect correspondence between the subheadings of the post and the reforms list, so not all reforms listed will be 100% relevant to the section in question.

Finally, we have tried (imperfectly) to be reasonably precise in our wording, and we ask that before criticising an argument of ours, commenters ensure that it is an argument that we are in fact making.

EA is very open to some kinds of critique and very not open to others

Summary: EA is very open to shallow critiques, but not deep critiques. Shallow critiques are small technical adjustments written in ingroup language, whereas deep critiques hint at the need for significant change, criticise prominent figures or their ideas, and can suggest outgroup membership. This means EA is very good at optimising along a very narrow and not necessarily optimal path.

EA prides itself on its openness to criticism, and in many areas this is entirely justified. However, willingness to engage with critique varies widely depending on the type of critique being made, and powerful structures exist within the community that reduce the likelihood that people will speak up and be heard.

Within EA, criticism is acceptable, even encouraged, if it lies within particular boundaries, and when it is expressed in suitable terms. Here we distinguish informally between “shallow critiques” and “deep critiques”.[16]

Shallow critiques are often:

  • Technical adjustments to generally-accepted structures

    • “We should rate intervention X 12% higher than we currently do.”

    • Changes of emphasis or minor structural/​methodological adjustments

    • Easily conceptualised as “optimising” updates rather than as cognitively difficult qualitative switches

  • Written in EA-language and sprinkled liberally with EA buzzwords

  • Not critical of capitalism

Whereas deep critiques are often:

  • Suggestive that one or more of the fundamental ways we do things are wrong

    • i.e. are critical of EA orthodoxy

    • Thereby implying that people may have invested considerable amounts of time/​effort/​identity in something when they perhaps shouldn’t have[17]

  • Critical of prominent or powerful figures within EA

  • Written in a way suggestive of outgroup membership

    • And thus much more likely to be read as hostile and/​or received with hostility

  • Political

    • Or more precisely: of a different politics to the broadly liberal[18]-technocratic approach popular in EA

EA is very open to shallow critiques, which is something we absolutely love about the movement. As a community, however, we remain remarkably resistant to deep critiques. The distinction is likely present in most epistemic communities, but EA appears to have a particularly large problem. Again, there will be exceptions, but the trend is clear.

The problem is illustrated well by the example of an entry to the recent Red-Teaming Contest: “The Effective Altruism movement is not above conflicts of interest”. It warned us of the political and ethical risks associated with taking money from cryptocurrency billionaires like Sam Bankman-Fried, and suggested that EA has a serious blind spot when it comes to (financial) conflicts of interest.[19]

The article (which did not win anything in the contest) was written under a pseudonym, as the author feared that making such a critique publicly would incur a risk of repercussions to their career. A related comment provided several well-evidenced reasons to be morally and pragmatically wary of Bankman-Fried, got downvoted heavily, and was eventually deleted by its author.

Elsewhere, critical EAs report[20] having to develop specific rhetorical strategies to be taken seriously. Making deep critiques or contradicting orthodox positions outright gets you labelled as a “non-value-aligned” individual with “poor epistemics”, so you need to pretend to be extremely deferential and/​or stupid and ask questions in such a way that critiques are raised without actually being stated.[21]

At the very least, critics have learned to watch their tone at all costs, and provide a constant stream of unnecessary caveats and reassurances in order to not be labelled “emotional” or “overconfident”.

These are not good signs.

Why do critical EAs have to use pseudonyms?

Summary: Working in EA usually involves receiving money from a small number of densely connected funding bodies/​individuals. Contextual evidence strongly suggests that raising deep critiques will drastically reduce one’s odds of being funded, so many important projects and criticisms are lost to the community.

There are several reasons people may not want to publicly make deep critiques, but the one that has been most impactful in our experience has been the role of funding.[22]

EA work generally relies on funding from EA sources: we need to pay the bills, and the kinds of work EA values are often very difficult to fund via non-EA sources. Open Philanthropy (and previously FTX) has had an almost hegemonic funding role in many areas of existential risk reduction, as well as in several other domains. This makes EA funding organisations and even individual grantmakers extremely powerful.

Prominent funders have said that they value moderation and pluralism, and thus people (like the writers of this post) should feel comfortable sharing their real views when they apply for funding, no matter how critical they are of orthodoxy.

This is admirable, and we are sure that they are being truthful about their beliefs. Regardless, it is difficult to trust that the promise will be kept when one, for instance:

  • Observes the types of projects (and people) that succeed (or fail) at acquiring funding

    • i.e. few, if any, deep critiques or otherwise heterodox/​“heretical” works

  • Looks into grantmakers and sees how they appear to have very similar backgrounds and opinions (i.e. they are highly orthodox)

  • Experiences the generally claustrophobic epistemic atmosphere of EA

  • Hears of people facing (soft) censorship from their superiors because they wrote deep critiques of the ideas of prominent EAs

    • Zoe Cremer and Luke Kemp lost “sleep, time, friends, collaborators, and mentors” as a result of writing Democratising Risk, a paper which was critical of some EA approaches to existential risk.[23] Multiple senior figures in the field attempted to prevent the paper from being published, largely out of fear that it would offend powerful funders. This saga caused significant conflict within CSER throughout much of 2021.

  • Sees the revolving door and close social connections between key donors and main scholars in the field

  • Witnesses grantmakers dismiss scientific work on the grounds that the people doing it are insufficiently value-aligned

    • If this is what is said in public (which we have witnessed multiple times), what is said in private?

  • Etc.

Thus, it is reasonable to conclude that if you want to get funding from an EA body, you must propose not only a good project, but one that cannot be interpreted as insufficiently “value-aligned”, however the grantmakers might define it. If you have an idea for a project that seems very important, but could be read as a “deep critique”, it is rational for you to put it aside.

The risk to one’s career is especially important given the centralisation of funding bodies as well as the dense internal social network of EA’s upper echelons.[24]

Given this level of clustering, it is reasonable to believe that if you admit to holding heretical views on your funding application, word will spread, and thus you will quite possibly never be funded by any other funder in the EA space, never mind any other consequences (e.g. gatekeeping of EA events/​spaces) you might face. For a sizeable portion of EAs, the community forms a very large segment of one’s career trajectory, social life, and identity; not things to be risked easily.[25] For most, the only robust strategy is to keep your mouth shut.[26]

Grantmakers: You are missing out on exciting, high potential impact projects due to these processes. When the stakes are as high as they are, verbal assurances are unfortunately insufficient. The problems are structural, so the solutions must be structural as well.

Suggested reforms

Below, we have a preliminary, non-exhaustive list of relevant suggestions for structural and cultural reform that we think may be good ideas and should certainly be discussed further.

It is of course plausible that some of them would not work; if you think so for a particular reform, please explain why! We would like input from a range of people, and we certainly do not claim to have all the answers!

In fact, we believe it important to open up a conversation about plausible reforms not because we have all the answers, but precisely because we don’t.

Italics indicate reforms strongly inspired by or outright stolen from Zoe Cremer’s list of structural reform ideas. Some are edited or merely related to her ideas; they should not be taken to represent Zoe’s views.

Asterisks (*) indicate that we are less sure about a suggestion, but sure enough that we think it is worth considering seriously, e.g. through deliberation or research. Otherwise, we have been developing or advocating for most of these reforms for a long time and have a reasonable degree of confidence that they should be implemented in some form or another.

Timelines are suggested to ensure that reforms can become concrete. If stated, they are rough estimates, and if there are structural barriers to a particular reform being implemented within the timespan we suggest, let us know!

Categorisations are somewhat arbitrary; we just needed to break up the text for ease of reading.

Critique

General

  • EAs must be more willing to make deep critiques, both in private and in public

    • You are not alone, you are not crazy!

    • There is a much greater diversity of opinion in this community than you might think

    • Don’t assume that the people in charge must be smarter than you, and that you must be missing something if you disagree – even most of them don’t think that!

  • EA must be open to deep critiques as well as shallow critiques

    • We must temper our knee-jerk reactions against deep critiques, and be curious about our emotional reactions to arguments – “Why does this person disagree with me? Why am I so instinctively dismissive about what they have to say?”

    • We must be willing to accept the possibility that “big” things may need to be fixed and that some of our closely-held beliefs are misguided

    • Our willingness to consider a critique should be orthogonal to the seniority of the authors or subjects of that critique

    • When we reject critiques, we should present our reasons for doing so

  • EAs should read more deep critiques of EA, especially external ones

    • For instance this blog and this forthcoming book

  • EA should cut down its overall level of tone/​language policing

    • Norms should still be strongly in favour of civility and good-faith discourse, but anger or frustration cannot be grounds for dismissal, and deep critique must not be misinterpreted as aggression or “signalling”

    • Civility must not be confused with EA ingroup signalling

    • Norms must be enforced consistently, applying to senior EAs just as much as newcomers

  • EAs should make a conscious effort to avoid (subconsciously/​inadvertently) using rhetoric about how “EA loves criticism” as a shield against criticism

    • Red-teaming contests, for instance, are very valuable, but we should avoid using them to claim that “something is being done” about criticism and thus we have nothing to worry about

    • “If we are so open to critique, shouldn’t we be open to this one?”

    • EAs should avoid delaying reforms by professing to take critiques very seriously without actually acting on them

  • EAs should state their reasons when dismissing critiques, and should be willing to call out other EAs if they use the rhetoric of rigour and even-handedness without its content

  • EAs, especially those in community-building roles, should send credible/​costly signals that EAs can make or agree with deep critiques without being excluded from or disadvantaged within the community

  • EAs should be cautious of knee-jerk dismissals of attempts to challenge concentrations of power, and seriously engage with critiques of capitalist modernity

  • EAs, especially prominent EAs, should be willing to cooperate with people writing critiques of their ideas and participate in adversarial collaborations

  • EA institutions and community groups should run discussion groups and/​or event programmes on how to do EA better

Institutions

  • Employees of EA organisations should not be pressured by their superiors to not publish critical work

  • Funding bodies should enthusiastically fund deep critiques and other heterodox/​“heretical” work

  • EA institutions should commission or be willing to fund large numbers of zero-trust investigations by domain-experts, especially into the components of EA orthodoxy

  • EA should set up a counter foundation that has as its main goal critical reporting, investigative journalism and “counter research” about EA and other philanthropic institutions [within 12 months]*

    • This body should be run by independent people and funded by its own donations, with a “floor” proportional to other EA funding decisions (e.g. at least one researcher/​community manager/​grant program, and admin fees of a certain amount)

    • If this foundation is established, EA institutions should cooperate with it

  • EA institutions should recruit known critics of EA and offer them e.g. a year of funding to write up long-form deep critiques

  • EA should establish public conference(s) or assemblies for discussing reforms within 6 months, with open invitations for EAs to attend without a selection process. For example, an “online forum of concerns”:

    • Every year invite all EAs to raise any worries they have about EA central organisations

    • These organisations declare beforehand that they will address the top concerns and worries, as voted by the attendees

    • Establish a voting mechanism, e.g. upvotes on the worries that seem most pressing

Red Teams

  • EA institutions should establish clear mechanisms for feeding the results of red-teaming into decision-making processes within 6 months

  • Red teams should be paid, composed of people with a variety of views, and former- or non-EAs should be actively recruited for red-teaming

    • Interesting critiques often come from dissidents/​exiles who left EA in disappointment or were pushed out due to their heterodox/​“heretical” views (yes, this category includes a couple of us)

  • The judging panels of criticism contests should include people with a wide variety of views, including heterodox/​“heretical” views

  • EA should use criticism contests as one tool among many, particularly well-suited to eliciting highly specific shallow critiques

Epistemics

General

  • EAs should see EA as a set of intentions and questions (“What does it mean to ‘do the most good’, and how can I do it?”) rather than a set of answers (“AI is the highest-impact cause area, then maybe biorisk.”)

  • EA should study social epistemics and collective intelligence more, and epistemic efforts should focus on creating good community epistemics rather than merely good individual epistemics

    • As a preliminary programme, we should explore how to increase EA’s overall levels of diversity, egalitarianism, and openness

  • EAs should practise epistemic modesty

    • We should read much more, and more widely, including authors who have no association with (or even open opposition to) the EA community

    • We should avoid assuming that EA/​Rationalist ways of thinking are the only or best ways

    • We should actively seek out not only critiques of EA, but critiques of and alternatives to the underlying premises/​assumptions/​characteristics of EA (high modernism, elite philanthropy, quasi-positivism, etc.)

    • We should stop assuming that we are smarter than everybody else

  • When EAs say “value-aligned”, we should be clear about what we mean

    • Aligned with what values in particular?

    • We should avoid conflating the possession of the general goal of “doing the most good” with subscription to the full package of orthodox views

  • EAs should consciously separate:

    • An individual’s suitability for a particular project, job, or role

    • Their expertise and skill in the relevant area(s)

    • The degree to which they are perceived to be “highly intelligent”

    • Their perceived level of value-alignment with EA orthodoxy

    • Their seniority within the EA community

    • Their personal wealth and/​or power

  • EAs should make a point of engaging with and listening to EAs from underrepresented disciplines and backgrounds, as well as those with heterodox/​“heretical” views

  • The EA Forum should have its karma/​commenting system reworked to remove structural forces towards groupthink within 3 months. Suggested specific reforms include, in gently descending order of credence (a rough illustrative sketch of two of them follows this list):

    • Each user should have equal voting weight

    • Separate agreement karma should be implemented for posts as well as comments

    • A “sort by controversial” option should be implemented

    • Low-karma comments should not be hidden

    • Low-karma comments should be occasionally shunted to the top
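As a purely illustrative sketch of two of these suggestions (equal voting weight and a “sort by controversial” option), the snippet below shows one way a forum could tally one-vote-per-user scores and surface contested comments. The scoring formula and field names are our own assumptions for illustration, not a description of how the Forum’s software actually works.

```python
from dataclasses import dataclass

@dataclass
class Comment:
    text: str
    upvotes: int    # counted one vote per user, regardless of each voter's karma
    downvotes: int

def controversy_score(c: Comment) -> float:
    """Highest when a comment attracts many votes split roughly evenly."""
    total = c.upvotes + c.downvotes
    if total == 0:
        return 0.0
    balance = min(c.upvotes, c.downvotes) / max(c.upvotes, c.downvotes)
    return total * balance

def sort_by_controversial(comments: list[Comment]) -> list[Comment]:
    """Surface contested comments instead of hiding low-karma ones."""
    return sorted(comments, key=controversy_score, reverse=True)

# Example: a heavily contested comment ranks above a mildly popular one.
comments = [Comment("orthodox take", 10, 1), Comment("deep critique", 12, 9)]
print([c.text for c in sort_by_controversial(comments)])  # ['deep critique', 'orthodox take']
```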

  • EA should embark on a large-scale exploration of “theories of change”: what are they, how do other communities conceptualise and use them, and what constitutes a “good” one? This could include:*

    • Debates

    • Lectures from domain-experts

    • Panel discussions

    • Series of forum posts

    • Hosting of experts by EA institutions

    • Competitions

    • EAGs framed around these questions

    • Etc.

  • When EA organisations commission research on a given question, they should publicly pre-register their responses to a range of possible conclusions

Ways of Knowing

  • EAs should consider how our shared modes of thought may subconsciously affect our views of the world – what blindspots and biases might we have created for ourselves?

  • EAs should increase their awareness of their own positionality and subjectivity, and pay far more attention to e.g. postcolonial critiques of western academia

    • History is full of people who thought they were very rational saying very silly and/​or unpleasant things: let’s make sure that doesn’t include us

  • EAs should study other ways of knowing, taking inspiration from a range of academic and professional communities as well as indigenous worldviews

Diversity

  • EA institutions should select for diversity

    • With respect to:

      • Hiring (especially grantmakers and other positions of power)

      • Funding sources and recipients

      • Community outreach/​recruitment

    • Along lines of:

      • Academic discipline

      • Educational & professional background

      • Personal background (class, race, nationality, gender, etc.)

      • Philosophical and political beliefs

    • Naturally, this should not be unlimited – some degree of mutual similarity of beliefs is needed for people to work together – but we do not appear to be in any immediate danger of becoming too diverse

  • Previous EA involvement should not be a necessary condition to apply for specific roles, and job postings should not assume that all applicants will identify with the label “EA”

  • EA institutions should hire more people who have had little to no involvement with the EA community, provided that they care about doing the most good

  • People with heterodox/​“heretical” views should be actively selected for when hiring to ensure that teams include people able to play “devil’s advocate” authentically, reducing the need to rely on highly orthodox people accurately steel-manning alternative points of view

  • Community-building efforts should be broadened, e.g. involving a wider range of universities, and group funding should be less contingent on the perceived prestige of the university in question and more focused on the quality of the proposal being made

  • EA institutions and community-builders should promote diversity and inclusion more, including funding projects targeted at traditionally underrepresented groups

  • A greater range of people should be invited to EA events and retreats, rather than limiting e.g. key networking events to similar groups of people each time

  • There should be a survey on cognitive/​intellectual diversity within EA

  • EAs should not make EA the centre of their lives, and should actively build social networks and career capital outside of EA

Openness

  • Most challenges, competitions, and calls for contributions (e.g. cause area exploration prizes) should be posted where people not directly involved within EA are likely to see them (e.g. Facebook groups of people interested in charities, academic mailing lists, etc.)

  • Speaker invitations for EA events should be broadened away from (high-ranking) EA insiders and towards, for instance:

    • Subject-matter experts from outside EA

    • Researchers, practitioners, and stakeholders from outside of our elite communities

      • For instance, we need far greater input from people from Indigenous communities and the Global South

  • External speakers/​academics who disagree with EA should be invited to give keynotes and talks, and to participate in debates with prominent EAs

  • EAs should make a conscious effort to seek out and listen to the views of non-EA thinkers

    • Not just to respond!

  • EAs should remember that EA covers one very small part of the huge body of human knowledge, and that the vast majority of interesting and useful insights about the world have come, and will continue to come, from outside of EA

Funding & Employment

Grantmaking

  • Grantmakers should be radically diversified to incorporate EAs with a much wider variety of views, including those with heterodox/​“heretical” views

  • Funding frameworks should be reoriented towards using the “right tool for the right job”

    • Optimisation appears entirely appropriate in well-understood, predictable domains, e.g. public health interventions against epidemic diseases[80]

    • But robustness is far superior when addressing domains of deep uncertainty, areas of high complexity, low-probability high-impact events, long timescales, poorly-defined phenomena, and significant expert disagreement, e.g. existential risk

    • Optimising actions should be taken on the basis of high-quality evidence, e.g. meta-reviews or structured expert elicitations, rather than being used as the default or even the only mode of operation

  • Grantmaking organisations should commission independent external evaluations of the efficacy of their work (e.g. the success rates of grantmakers in forecasting the impact or success of projects) within 6 months, and release the results of any internal work they have done to this end

  • Within 5 years, EA funding decisions should be made collectively

    • First set up experiments for a safe cause area with small funding pots that are distributed according to different collective decision-making mechanisms

      • For example rotating panels, various forms of lottocracy

      • Subject-matter experts should always be consulted, and their input weighted appropriately

    • Experiment in parallel with randomly selected samples of EAs evaluating the decisions of one existing funding committee

      • Existing decision-mechanisms are thus ‘passed through’ an accountability layer

    • All decision mechanisms should have a deliberation phase (arguments are collected and weighed publicly) and a voting phase (majority voting, quadratic voting, etc.; a rough sketch of a quadratic-voting tally follows this list)

    • Depending on the cause area and the type of choice, either fewer (experts + randomised sample of EAs) or more people (any EA or beyond) should take part in the funding decision
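To make the voting-phase options slightly more concrete, here is a minimal sketch of how a quadratic-voting tally for collective funding decisions could work. The credit budget, proposal names, and ballots are hypothetical; this is one possible mechanism among the several listed above, not a recommendation for a specific implementation.

```python
import math

def effective_votes(ballot: dict[str, float]) -> dict[str, float]:
    """Under quadratic voting, casting v votes costs v**2 credits,
    so spending c credits on an option buys sqrt(c) votes."""
    return {option: math.sqrt(credits) for option, credits in ballot.items()}

def tally(ballots: list[dict[str, float]], budget: float = 100.0) -> dict[str, float]:
    """Sum effective votes across voters, skipping over-budget ballots."""
    totals: dict[str, float] = {}
    for ballot in ballots:
        if sum(ballot.values()) > budget:
            continue  # invalid ballot: spent more credits than allowed
        for option, votes in effective_votes(ballot).items():
            totals[option] = totals.get(option, 0.0) + votes
    return totals

# Example: three voters each allocate 100 credits across two hypothetical proposals.
ballots = [
    {"proposal_a": 81, "proposal_b": 19},   # strong preference for A
    {"proposal_a": 25, "proposal_b": 75},
    {"proposal_a": 0, "proposal_b": 100},
]
print(tally(ballots))  # proposal_a = 14.0, proposal_b ≈ 23.0
```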

  • A certain proportion of EA funds should be allocated by lottery after a longlisting process to filter out the worst/​bad-faith proposals* (a minimal sketch of such a draw follows below)

    • The outcomes of this process should be evaluated in comparison to EA’s standard grantmaking methods as well as other alternatives
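As a minimal sketch of the lottery suggestion, assuming a simple pass/fail longlisting filter and a uniform random draw (both our own assumptions for illustration, not a worked-out proposal):

```python
import random
from typing import Callable

def lottery_allocation(proposals: list[str],
                       passes_longlist: Callable[[str], bool],
                       n_grants: int,
                       seed: int | None = None) -> list[str]:
    """Uniformly draw n_grants winners from the proposals that survive longlisting."""
    longlist = [p for p in proposals if passes_longlist(p)]
    rng = random.Random(seed)  # a fixed, published seed makes the draw auditable
    return rng.sample(longlist, k=min(n_grants, len(longlist)))
```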

  • Grantmaking should require detailed and comprehensive conflict of interest reporting

Employment

  • More people working within EA should be employees, with the associated legal rights and stability of work, rather than e.g. grant-dependent “independent researchers”

  • EA funders should explore the possibility of funding more stable, safe, and permanent positions, such as professorships

Contact Us

If you have any questions or suggestions about this article, EA, or anything else, feel free to email us at concernedEAs@proton.me