Decision-making and decentralisation in EA

This post is a slightly belated contribution to the Strategy Fortnight. It represents my personal takes only; I’m not speaking on behalf of any organisation I’m involved with. For some context on how I’m now thinking about talking in public, I’ve made a shortform post here. Thanks to the many people who provided comments on a draft of this post.

Intro and Overview

How does decision-making in EA work? How should it work? In particular: to what extent is decision-making in EA centralised, and to what extent should it be centralised?

These are the questions I’m going to address in this post. In what follows, I’ll use “EA” to refer to the actual set of people, practices and institutions in the EA movement, rather than EA as an idea.

My broad view is that EA as a whole is currently in the worst of both worlds with respect to centralisation. We get the downsides of appearing (to some) like a single entity without the benefits of tight coordination and clear decision-making structures that centralised entities have.

It’s hard to know whether the right response to this is to become more centralised or less. In this post, I’m mainly hoping just to start a discussion of this issue, as it’s one that impacts a wide range of decisions in EA. [1] At a high level, though, I currently think that the balance of considerations tends to push in favour of decentralisation relative to where we are now.

But centralisation isn’t a single spectrum, and we can break it down into sub-components. I’ll talk about this in more depth later in the post, but here are some ways in which I think EA should become more decentralised:

Perception: At the very least, wider perception should match the reality of how (de)centralised EA is. That means:

  • Core organisations and people should communicate clearly (and repeatedly) about their roles and what they do and do not take ownership of. (I agree with Joey Savoie’s post, which he wrote independently of this one.)

  • We should, insofar as we can, cultivate a diversity of EA-associated public figures.

  • [Maybe] The EA Forum could be renamed. (Note that many decisions relating to CEA will wait until it has a new executive director).

  • [Maybe] CEA could be renamed. (This is suggested by Kaleem here.)

Funding: It’s hard to fix, but it would be great to have a greater diversity of funding sources. That means:

  • Recruiting more large donors.

  • One or more significant donors could start a regranting program.

  • More people pursue earning to give, or donate more (though I expect this “diversity of funding” consideration to have already been baked into most people’s decision-making on this). Luke Freeman has a moving essay about the continued need for funding here.

Decision-making:

  • Some projects that are currently housed within EV could spin out and become their own legal entities. The various different projects within EV have each been thinking through whether it makes sense for them to spin out. I expect around half of the projects will ultimately spin out over the coming year or two, which seems positive from my perspective.

  • [Maybe] CEA could partly dissolve into sub-projects.

Culture:

  • We could try to go further to emphasise that there are many conclusions that one could come to on the grounds of EA values and principles, and celebrate cases where people pursue heterodox paths (as long as their actions are clearly non-harmful).

Here are some ways in which I think EA could, ideally, become more centralised (though these ideas crucially depend on someone taking them on and making them happen):

Information flow:

  • Someone could create a guide to what EA is, in practice: all the different projects, and the roles they fill, and how they relate to one another.

  • Someone could create something like an intra-EA magazine, providing the latest updates and featuring interviews with core EAs.

  • Someone could take on a project of consolidating the best EA content and ideas, for example into a quarterly journal.

Provision of other services that benefit the EA ecosystem as a whole:

  • Someone could set up an organisation or a team that’s explicitly taking on the task of assessing, monitoring and mitigating ways in which EA faces major risks, and could thereby fail to provide value to the world, or even cause harm.

  • Someone could set up a leadership fast-track program.

And here are a couple of ways in which things are already highly decentralised, and in my view shouldn’t change:

Ownership:

  • No-one owns “EA” as a brand, or its core ideas.

Group membership:

  • Anyone can call themselves a part of the EA movement.



Thinking through the issue of decentralisation has also led me to plan to make some changes to how I operate in a decentralised direction:

Decision-making:

  • I plan to step down from the board of Effective Ventures UK once we have more capacity.

Perception:

  • I plan to go further to distance myself from the idea that I’m “the face” of EA, or a spokesperson for all of EA. (This hasn’t been how I’ve ever seen myself, but is how I’m sometimes perceived.)

In a being-helpful-where-I-can way (rather than a “taking-ownership-of-this-thing” way), I’m also spending some time trying to bring in new donors, and help support other potential public figures. I’m not doing anything, for now, in the direction of further centralisation.

A final caveat I’ll make on all the above is that this is how I see things for now. The question of centralisation is super hard, and what makes sense will change depending on the circumstances of the time. Early EA (prior to ~2015) was notably less centralised than it was after that point, and I think that at that time increased centralisation was a good thing. In the future, I’m sure there’ll be further changes that will make sense, too, in both decentralised and centralised directions.

The rest of this post is structured as follows: first, I’ll describe how decision-making currently works in EA; then I’ll discuss whether EA should become more or less centralised.

How decision-making works in EA

A number of people have commented on the Forum that they don’t feel they understand how decision-making works in EA, and I’ve sometimes seen misinformation floating around; this confusion is often about how centralised EA is.

So I’m going to try to clarify things a bit. It’s tough to describe the situation exactly, because the reality is a middle ground between a highly centralised decision-making entity like a company and complete anarchy. And where exactly EA lies between those two extremes often depends on what exactly we’re talking about.

Anyway, here goes. Some ways in which the EA movement is centralised:

  • A single funder (Open Philanthropy, “OP”) allocates the large majority (around 70%[2]) of funding that goes to EA movement-building. If you want to do an EA movement-building project with a large budget ($1m/yr or more), you probably need funding from OP, for the time being at least. Vaidehi Agarwalla’s outstandingly helpful recent post gives more information.

  • Effective Ventures US and UK (“EV”) currently house the majority of EA movement-building work.

  • The senior figures in EA are in fairly regular communication with each other (though there’s probably less UK<>US communication than there should be).

    • It’s not totally determinate who is a “senior figure”, and it varies over time, but the current list of people would at least include: Nick Beckstead, Alexander Berger, Max Dalton, Holden Karnofsky, Howie Lempel, Brenton Mayer, Tasha McCauley, Toby Ord, Lincoln Quirk, Nicole Ross, Eli Rose, Zach Robinson, James Snowden, Ben Todd, Ben West, Claire Zabel, and me. All of these people have had or currently have positions at OP or senior positions at EV.

  • There’s usually an annual meeting of around 30 senior or core people, the Coordination Forum (formerly called the “Leaders’ Forum”), which is run by CEA largely as an unconference. This year, there hasn’t been an equivalent so far, but there will probably be one later in the year.

  • Normally, before someone embarks on a major project, they get feedback from a wide variety of people on the project, and there’s a culture of not taking “unilateralist” action if most other people think that the project is harmful, even if it seems good to the person considering it. (Ideally, in a binary choice and given a number of assumptions, one pursues the action only if the median estimate of its expected value, among the people assessing it, is positive. It’s debatable to what extent this rule is followed in practice in EA, or whether the simple models in that paper are good guides to reality.)
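To make the decision rule concrete, here’s a minimal sketch of the median-estimate rule versus the unilateralist failure mode it’s meant to prevent. (This is purely illustrative; the function names and numbers are mine, not from the literature, and as noted above it’s debatable how closely anyone follows this rule in practice.)

```python
import statistics

def should_proceed(estimates):
    """The anti-unilateralist rule: proceed with a binary action only
    if the *median* estimate of its expected value, among the people
    assessing it, is positive."""
    return statistics.median(estimates) > 0

def any_unilateralist_proceeds(estimates):
    """The failure mode without coordination: the action happens if
    even one assessor thinks it's positive-value."""
    return max(estimates) > 0

# Five people assess a project; most think it's mildly harmful,
# one optimist thinks it's great.
estimates = [-2.0, -1.0, -0.5, -0.5, 3.0]

print(should_proceed(estimates))           # False: median is -0.5
print(any_unilateralist_proceeds(estimates))  # True: the optimist would go ahead
```

The gap between the two functions is the unilateralist’s curse: with enough independent assessors, someone is likely to overestimate the value of a bad project, so deferring to the group’s median is more robust than acting on one’s own estimate.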


Some ways in which EA is decentralised:

  • There’s no one, and no organisation, who conceives of themselves as taking ownership of EA, or as being responsible for EA as a whole.

    • CEA doesn’t see itself in this way. For example, here it says, “We do not think of ourselves as having or wanting control over the EA community. We believe that a wide range of ideas and approaches are consistent with the core principles underpinning EA, and encourage others to identify and experiment with filling gaps left by our work.”

    • EV doesn’t see itself in this way, and it includes projects that don’t consider themselves to be part of the EA movement or engaged in EA movement-building (such as Centre for the Governance of AI, Longview Philanthropy, and Wytham Abbey).

    • The partial exception to this is CEA’s community health team, on issues of misconduct in the community, though even there they are well aware of the limited amount of control they have.

  • There is no trademark on “effective altruism.” Anyone can start a project that has “effective altruism” in the name.

  • There’s no requirement for EA organisations to be affiliated with Effective Ventures, and many aren’t, such as Rethink Priorities, the Global Challenges Project and some country-level organisations such as Effective Altruism UK.

  • There are a number of distinct core EA projects (CEA, 80,000 Hours, Giving What We Can, Rethink Priorities, Global Challenges Project, etc.) that make independent strategic plans.

  • There’s no CEO or “leadership team” of EA. There aren’t any formal roles that would be equivalent to C-level executives at a company. It’s vague who counts as a “senior EA”.

    • Across Effective Ventures US and UK, for example, decision-making is in practice shared between two boards, two CEOs, and the CEO or Executive Director of every project within the legal entities (e.g. CEA, 80,000 Hours, Giving What We Can, EA Funds, Centre for the Governance of AI, etc). Those project leads develop their projects’ annual plans and strategy, including making many of the most important decisions relevant to the movement as a whole (e.g. how to do marketing, and which target audience to have).

  • There are a number of donors who are major in absolute terms, as well as a diversity of funding opportunities from places like EA Funds and the Survival and Flourishing Fund. These funders are generally very keen to fund things that they think OP is overlooking.

  • Generally, I find there’s a very positive attitude among senior EAs toward competition within the EA ecosystem.

    • The Global Challenges Project is illustrative. Emma Abele and James Aung thought that CEA was doing a suboptimal job with (some) student groups. So they set up their own project, got funding from Open Philanthropy, and did a great job.

    • Similarly, Probably Good was set up as being (in some ways) a competitor to 80,000 Hours, because the founders thought that 80,000 Hours was lacking in some important ways; it has received support from Open Philanthropy and encouragement from 80,000 Hours.

In general, coordination is pretty organic and informal, and happens in one of two ways:

  1. People or organisations come up with plans, proactively get feedback on their plans, get told the ways in which their plans are good or bad, and they revise them.

  2. Someone (or some people) have an idea that they think should exist in the world, and then shop it around to see if someone wants to take it on.

Overall, the best analogy I can think of is that EA functions as a “do-ocracy”. Here is a short article on do-ocracy, which is well worth reading. A slogan to define do-ocracy, which I like, is: “If you want something done, do it, but remember to be excellent to each other when doing so.” (Where, within EA, the ‘be excellent’ caveat covers non-unilateralism and taking externalities across the movement seriously.) I think this both represents how EA actually works, and how most senior EAs understand it as working.

I think the main way EA departs from being a do-ocracy is that many people might not perceive it that way (very naturally, because it hasn’t yet been publicly defined that way); there’s a culture where people sometimes feel afraid of unilateralism, even in cases where that fear doesn’t make sense. If that’s true, it means that some people don’t do things because they feel they aren’t “allowed” to, or perhaps because they think that someone else has responsibility, or has figured it all out.

Compared to a highly-centralised entity like a company, the semi-decentralised/do-ocracy nature of EA has a few important upshots. This is the part of the post I feel most nervous about writing, because I’m worried that others will interpret this as me (and other “EA leaders”) disavowing responsibility; I’m already anxiously visualising criticism on this basis. But it seems both important and true to me, so I still want to convey it. The upshots are:

  • If something bad happens, it’s natural to look for who is formally responsible for the problem. (And, in a company, there’s always someone who is ultimately formally responsible: responsibility bottoms out with the CEO). But, often, the answer is that there’s no one who was formally responsible, and no one who was formally responsible for making sure that someone was formally responsible.

  • It’s difficult for calls along the lines of, “Something should be done about X”, or “EA should do Y” to have traction, unless the call to action is targeted at some particular person or project, because there’s no one who’s ultimately in charge of EA, and who is responsible for generally making the whole thing go well. (See Lizka Vaintrob’s excellent post on this here).

  • The reason for something happening or not happening is often less deep than one might expect, boiling down to “someone tried to make it happen” or “no one tried to make it happen”, rather than “this was the result of some carefully considered overarching strategy”. Moreover, the list of things it would be good to do is very long, and the bottleneck is normally there being someone with the desire, ability and spare capacity to take it on.

  • Thoughts of “I’m sure this is the way it is because some more well-informed people have figured it out” are often incorrect, especially about things that aren’t happening.

I get the sense that the above points mark a major difference in how many people who work for core EA orgs see decision-making in EA working, and how it’s perceived by some in the wider community. I have some speculative hypotheses about why there’s this discrepancy, but it’s a big digression so I’ve put it into a footnote. [3]

When thinking about how centralised or not EA is, or should be, it can be helpful to have in mind concrete potential analogies, and the strengths and weaknesses they have. Here’s a spectrum of organisations, in descending order from more to less centralised (as it seems to me):

  • communist dictatorships (e.g. North Korea)

  • the US army

  • most companies (e.g. Apple)

  • highly centralised religious groups (e.g. Mormonism)

  • franchises (e.g. McDonald’s)

  • the Scouts

  • mixed economies (the US, UK)

  • registered clubs and sports groups (e.g. The United States Golf Association; USA Basketball)

  • intergovernmental decision-making

  • fairly decentralised religious groups (e.g. Protestantism, Buddhism)

  • most social movements (e.g. British Abolitionism, the American Civil Rights Movement)

  • the scientific community

  • most intellectual movements (e.g. behaviourism)

  • the US startup scene

This is highly subjective, but it seems to me the overall level of centralisation within EA is currently similar to that of fairly decentralised religious groups and many social movements.

It can also be helpful to break down “centralisation” into sub-dimensions, such as:

  • Decision-making power: To what extent is what the group as a whole does determined by a small group of decision-makers?

    • Are these decision-making structures formal or informal?

    • Do these decision-makers have control over resources, including financial resources?

    • Who is accountable for success or failure? Are these accountability mechanisms formal or informal?

  • Ownership: Is there legal ownership of constitutive aspects of the group (e.g. intellectual property, branding)?

  • Group membership: How strong is the ability to determine membership in the group: How hard is it for someone in the group to leave? How hard is it for someone outside of the group to enter? And how tightly-defined is group membership?

    • Are there formal mechanisms for doing this, or merely informal?

  • Information flow: To what extent does information flow merely from decision-makers down to other group members, and to what extent does it flow back up to decision-makers, or horizontally from one non-decision-maker to another?

  • Culture: Do people within the group feel empowered to think and act autonomously, or do they feel they ought to defer to the views of high-status individuals within the group, or to the majority view within the group? [4]

On these dimensions, it seems to me that EA is currently fairly decentralised on group membership and information flow, very decentralised on ownership, and in the middle on decision-making power [5] and culture.

Should EA be more or less centralised?

At the moment, it seems to me we’re in the worst of both worlds: many people think that EA is highly centralised, whereas really it’s in-between. We get the downsides of appearing (to some) like one entity without the benefits of tight coordination. There’s also a risk that people assume the “central” groups and people are in charge of all issues impacting EA, and so see no need to address any gaps they perceive, even when that’s not the case.

I’ll talk more about specific ways EA could centralise or decentralise in the next section. If we were going broadly in the direction of further centralisation, then, for example, CEA could explicitly consider itself as governing the community, and explicitly take on more roles. Going further in that direction, there could even be a membership system for being part of EA, like the Scouts has. If we were going broadly in the direction of further decentralisation, then CEA could change its name and perhaps separate into several distinct projects, some more projects could spin out of Effective Ventures, and we could all more loudly communicate that EA is a decentralised movement and cultivate a decentralised culture.

I’ll give the broad case both for and against further centralisation or decentralisation, and then get into specifics.

The broad case for further centralisation includes (in no particular order):

  • There are some issues or activities that concern the community as a whole, or where there are major positive/negative externalities, or natural monopolies. These include:

    • The handling of bad actors within EA, who can cause harm to the whole of the movement.

    • Infohazards (e.g. around bio x-risk).

    • Issues that impact on EA’s brand. For example, whether to associate with a very public new donor, or whether to run a public EA campaign.

  • Given the ubiquity of fat-tailed distributions, semi-centralisation is almost inevitable. Wealth is heavily fat-tailed, so it’s very likely that one or a small number of funders end up accounting for most funding. [6] Similarly, fame (measured by things like number of social media followers, media mentions, or books sold) also seems to be fat-tailed, so it’s likely that one or a small number of people will end up accounting for most of the attention that goes towards specific people. We can try to combat this, but we’ll be fighting against strong forces in the other direction.

  • The nonprofit world is very unlike a marketplace. Crucially, there isn’t a price mechanism which can aggregate decentralised information and indicate how the provision of goods and services should be prioritised and thereby incentivise the production of goods and services that are most needed. [7] So common arguments within economics that, under some conditions, favour something like market competition, don’t cleanly port over. [8]

  • Centralisation can enable greater control over the movement in potentially-desirable ways. (Somewhat analogously, governments can help control an economy by printing money, setting interest rates, and so on.)

    • For example, as movements grow, there’s a risk that their ideals become diluted over time, regressing to the mean of wider society’s views. Centralisation can be a way of preventing or slowing that tendency; perhaps the ideal growth rate for EA is faster or slower than the “organic” growth rate.

    • In the absence of coordination, some projects might get started, or continue, for “unilateralist’s curse” type reasons: naturally, there will be a range of assessments of how good a potential or existing project is, and in the absence of coordination (or at least information-sharing), those who think the project is best will go ahead with it, even if it’s overall a bad idea.

    • Centralisation can help enforce quality control, preventing low-integrity or low-quality projects from damaging the wider public’s perception of EA. [9]

  • Decentralisation risks redundancy, with multiple people working on very similar projects. Centralisation gets benefits from economies of scale — there are certain things you only need to do or figure out once (e.g. setting up a legal entity, having accounting, legal, HR departments (etc)).

  • No matter how the EA movement is structured, onlookers will often treat it as a single entity, interpreting actions from any one person or organisation as representative of the whole.

  • It seems harder for a decentralised movement to centralise than it is for a centralised movement to decentralise. So, trying to be as centralised as possible at the moment preserves option value.
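The fat-tailed-distributions point above can be illustrated with a quick simulation, contrasting a heavy-tailed (Pareto) distribution of donor wealth with a thin-tailed one. (This is a purely hypothetical illustration; the donor counts and parameters are invented, and the point is only the qualitative contrast.)

```python
import random

def top_share(samples, k=1):
    """Fraction of the total held by the k largest values."""
    s = sorted(samples, reverse=True)
    return sum(s[:k]) / sum(samples)

random.seed(0)

# 100 hypothetical donors whose giving capacity is drawn from a
# fat-tailed distribution (Pareto, alpha=1.2) versus a thin-tailed
# one (roughly normal, mean 10, sd 2).
fat = [random.paretovariate(1.2) for _ in range(100)]
thin = [abs(random.gauss(10, 2)) for _ in range(100)]

print(f"top donor's share, fat-tailed:  {top_share(fat):.0%}")
print(f"top donor's share, thin-tailed: {top_share(thin):.0%}")
```

Under the fat-tailed draw, a single donor typically accounts for a large chunk of total funding, while under the thin-tailed draw the top donor holds only a percent or two. This is the sense in which semi-centralisation of funding is almost inevitable rather than a deliberate choice.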

The broad case for further decentralisation includes (in no particular order):

  • People in EA are doing a wide variety of things, and it’s hard for one organisation to speak to and satisfy all the different sub-cultures within EA at once. There are very different needs and interests from, for example, student activists, academics, people working in national security, old-time rationalists, major philanthropists, etc, and among people working in different cause areas.

  • Relatedly, decentralised decision-making benefits from local knowledge. The way EA should be thought about or communicated across causes and countries will be very different; decisions about how EA should be adapted to those contexts are probably best done by people with the most knowledge about those contexts.

  • Even if the nonprofit world is significantly unlike a for-profit marketplace, there are still good arguments for thinking that competition can be highly beneficial, resulting in better organisations and products. This is both because (i) competition means that people can choose the better service, and (ii) competition incentivises better service provision among the competitors. In contrast, centrally-planned groups are often slow-moving, bureaucratic, and ineffective.

  • Any centralised entity would be very unlike a government. It couldn’t forcibly tax its members, or enforce its policies through its own legal system. So common arguments within economics and political science that, under some conditions, favour something like government action, don’t cleanly port over.

  • Most activities within EA don’t concern the community as a whole, or have major positive/negative externalities, or natural monopolies.

  • Centralisation can be less empowering. Suppose that there’s some activity X that would be well worth doing, and benefit all of EA, but the central entities haven’t done it (for bad reasons). Then, if the widespread understanding is that the movement is centralised, X just won’t happen: other parties will believe that the central entities have got it covered.

  • Centralisation is more fragile in some ways. If, for example, there was only one EA organisation, then the collapse of that one organisation would mean the collapse of EA as a whole.

  • There’s a risk that EA ossifies in thought, becoming locked-in to a certain set of founding beliefs or focuses. In particular, if there’s a set of early highly influential thinkers, and the views of those early thinkers become the default such that it’s much harder for the movement as a whole to reason away from those views, then, in the likely event that those early thinkers are mistaken in some important ways, that would be very bad. This risk could be especially likely if people who aren’t sympathetic to those particular beliefs are more likely to bounce off the movement, so the movement becomes disproportionately populated with people sympathetic to those beliefs. Centralisation might increase this risk.

    • This seems to happen in science. Max Planck famously quipped that science advances “one funeral at a time” and some recent evidence (which I haven’t vetted) suggests that’s correct.[10]

    • And it often seems to happen in other social and intellectual movements, too.[11]

  • The tractability of further centralisation seems low. This is for a few reasons:

    • If there’s some central grand plan for how EA should be, and some people disagree with that plan, there’s not much in the way of enforcement that a central body can do. At the moment, people can’t get fired or kicked out of EA: they can get disinvited from EA events, refused funding by funders who agree, removed from the EA Forum, and information about their being a bad actor can be percolated, but that’s not necessarily enough to prevent them continuing. And these actions would seem harsh as a response to someone simply disagreeing with a strategic plan. Ultimately, if some person, organisation or group wants to do something and call it EA, they just can. This means that centralisation efforts risk being toothless.

    • One could try to change this, for example by having a “membership” system like many political parties have and some advocacy groups (e.g. the Sierra Club, or the NAACP) have. But I think that, even if that seemed desirable, trying to implement that seems extremely hard.

    • It’s hard to see who would lead a centralisation effort. They’d need to have a combination of ability, desire and legitimacy within the movement, without it also being the case that it’s more important for them to work on something else.

Of these, the biggest considerations in favour of centralisation, in my view, are option value and the handling of bad actors. The biggest considerations in favour of decentralisation are worries about ossification and lock-in, the benefits of competition, and, above all, that I think the tractability of further centralisation seems low.

As I mentioned at the outset, there’s not a single spectrum of centralisation to decentralisation, and I’ll get into specifics in the next section. Overall, I think the arguments on average broadly tend towards further decentralisation rather than centralisation. But I’m still very unsure: there are tough tradeoffs here. If centralised, you get fewer bad projects but fewer good projects, too; you get less redundancy but less innovation. So, even though I’m broadly in favour of further decentralisation, if there was, for example, a new Executive Director of CEA or someone at Open Philanthropy who really wanted to take the mantle on, and could build the legitimacy needed to pull it off, I’d be interested to see them experiment with centralisation in some areas and see how that goes.

Going back to the list of comparisons: I feel like the level of decentralisation in the scientific community or in intellectual movements is in the vein of what we should aim for. The analogy I like best, at the moment, is with specific scientific/academic communities. I know most about the analytic philosophy community. Here are some notable aspects of that community, where I think the analogy is helpful (feel free to skip the sub-bullets if you aren’t interested in the details; I’m also not claiming that we should emulate the analytic philosophy community, just that it’s an interesting analogue in terms of level of (de)centralisation):

  • Centralised bodies tend to take the form of provision of services rather than top-down control. They tend to arise because some person or group has unilaterally offered them and they’ve had widespread adoption. Often, there are different groups offering the same services.

    • The closest thing to a centralised body in analytic philosophy is the American Philosophical Association. What it does is limited, though, and as a philosopher you rarely interact with it or think about it; it isn’t a very powerful force within the field of analytic philosophy.

      • It runs what I believe are the three largest philosophy conferences. First-round interviews for US tenure-track philosophy jobs are usually held at one of these conferences.

      • It provides some grants, fellowships, and funds.

      • It provides some online resources, too, although they don’t seem very influential.

      • I think it used to host adverts for jobs in philosophy, but then PhilJobs did the same thing but better so they now use PhilJobs.

    • Some other examples of “centralised” services in philosophy:

      • Journals. Nowadays, their key role is to act as quality-stamps on philosophical output. The prestige of different journals is generally well-known, and publication in a particular journal is understood as a way of (i) indicating to other philosophers that this piece of work might be worth looking at; (ii) providing evidence of the quality of a philosopher’s work for hiring committees and tenure committees.

        • Different journals are run by different groups, traditionally by universities or publishers. More recently, Philosophers’ Imprint was founded by two philosophers who thought they could create an online and open-access journal that was better-run than existing journals, and it’s been very successful.

      • The Philosophical Gourmet Report ranks graduate programs in philosophy, by surveying leading philosophers on the impressions of quality of faculty at the different departments. It’s very influential. It was originally created single-handedly by one philosopher, Brian Leiter.

      • The Stanford Encyclopedia of Philosophy, which functions as the go-to textbook within philosophy.

      • Two philosophers, David Bourget and David Chalmers, created a range of services. Philjobs is a job board for philosophy positions. PhilPapers is an index and bibliography of philosophy, and also runs a survey of philosophers’ beliefs. PhilEvents is a calendar of conferences and workshops.

      • Various surveys of journal rankings.

      • DailyNous and Leiter Reports, two blogs which aggregate news in the philosophical world.

    • Some fields have some limited amount of top-down control.

      • For example, the American Psychiatric Association defines key terms in the Diagnostic and Statistical Manual of Mental Disorders, which are widely accepted. I think it would be great if EA had some key defined terms like this. (I think this to an even greater extent with AI safety.)

      • The climate physics and climate economics communities have the Intergovernmental Panel on Climate Change, which attempts to represent consensus views within these fields. I don’t see an obvious plausible analogue within EA. Something similar but massively toned-down, like an encyclopedia, could be very helpful.

  • Change in what philosophers work on, or how they operate, generally happens organically, as a result of many individuals’ decisions about what is important or how philosophy should be done.

    • There is sometimes explicit commentary on how philosophy should be done or what it should focus on, but when that’s influential, it’s usually because arguments have been made by people with a long established track record of excellent work. (For example, this from John Broome or this from Timothy Williamson.)

  • There’s an enormous amount of internal disagreement among philosophers. Analytic philosophy is defined much more by a methodology (clear, rigorous argument), a set of defining questions (free will, the nature of morality, etc.), and an intellectual tradition, than by any particular set of views.

    • I think this is true in other areas of science, too, although the amount of disagreement is usually lower, and sometimes we really just know things and there’s not really a way to be a good scientist on the topic while having heterodox beliefs (e.g. believing in telekinesis, or that the Earth is only 6000 years old). I think the amount of agreement that should be expected within effective altruism should be closer to that within philosophy rather than within physics (which has a much larger body of very-high-confidence knowledge).

  • There aren’t strict membership conditions for being a philosopher. (For example, you don’t need to be employed by a university.)

    • Membership criteria exist in other fields, though, like medicine. Medicine also provides a nice distinction between being a researcher and being a clinician or practitioner, which ports over to effective altruism, too.

I’m not claiming that EA should exactly mirror the analytic philosophy community. And it would be a suspicious coincidence if it were the best model! I’m using it as an example for calibration — a concrete analogy of the level of centralisation we might want. In particular, reflection on it makes vivid to me the extent to which we can have community-wide services without centralisation, as a result of individuals noticing that some service isn’t being provided and setting something up to provide it.

On this broad view, what EA should aspire to be is not a club, a social movement, an identity, or an alliance of specific causes. And it should only be a community or a professional network in a broad sense. Instead, it should aspire to be more like a field — like the fields of philosophy, or medicine, or economics. [12]

Getting more specific

Given all the above, what are some more specific upshots? Here are some tentative suggestions.

First, there are some moves in the direction of decentralisation that seem very robustly good, and many of which are happening anyway:

Perception:

  • Reflect reality on how centralised we are.

    1. Inaccurate perceptions on this seem like all downside to me.

    2. Assuming I’m right that, currently, perception doesn’t match reality, it means the core projects and people in EA should communicate more about what they are and are not taking responsibility for.

      1. This post is trying to help with that!

      2. But more generally, now that EA is the size it is, I suspect it means that core projects and people will need to communicate some basic things about themselves many, many times, even though it’ll feel very repetitive to them.

  • Encourage a broader range of EA-affiliated public figures

    1. I’d love there to be a greater diversity of people who are well-known as EA-advocates, reflecting the intellectual, demographic and cultural diversity within the movement.

Funding:

  • Get more major donors.

    1. This would be a very clear win, though it’s hard to achieve.

    2. There are a handful of EA-aligned potential donors who might possibly become significant donors over the next few years. But there’s no one who I expect to be as major, in particular within EA movement-building, as OP.

  • Restart a regranters program

    1. This would have to be done by OP or some other major donor; it would give more power over funding decisions to more people.

  • More people donate more or earn to give

    1. One way this plays out is that, because OP aims to limit the amount it contributes to most organisations, and in some cases has capped how much of an organisation’s budget it will support, other funders who donate to those organisations can in effect “reallocate” Open Phil funding towards those orgs.

    2. Of course, increasing funding diversity is only one consideration among very many when making career decisions!

Decision-making:

  • Some projects should spin out from EV

    1. Especially as projects grow in size, I think this makes sense from their perspective: it allows the projects to have greater autonomy. And it’ll have benefits across the EA movement, too.

    2. The various projects under EV have been thinking this through, and weighing the costs and benefits. My guess is that around half will ultimately spin out over the next year or two. If this happens, it seems like a positive development to me.

Culture:

  • Celebrate diversity

    • We could try to go further in emphasising that there are many conclusions one could come to on the grounds of EA values and principles, and celebrate cases where people pursue heterodox paths, as long as their actions are clearly non-harmful. This can be tough to do, because it means praising someone for taking what is, in your view, the wrong (in the sense of suboptimal) decision.

Then there are some steps I can personally take in the direction of decentralisation and that seem like clear wins to me. I plan to:

  • Step down from the board of Effective Ventures UK once we have more capacity. (I’m not currently sure on timelines for that. I’ll note I’ve also been recused from all decision-making relating to EV’s response to the FTX collapse.) I’ve been in the role for 11 years, and now feels like a natural time to move on. I think there are a lot of people who could do this role well, and me stepping down gives an opportunity for someone else to step up.

    • I think that this will move EA in a decentralised direction on the dimension of both perception and decision-making power.

  • Distance myself from the idea that I’m “the” face of EA. I’ve never thought of myself this way (let alone as “the leader”) and there have always been many high-profile EA advocates. But others, especially in the media, have sometimes portrayed or viewed me in this way. Trying to correct this will hopefully be a step in the direction of decentralisation on the perception and culture dimensions.

    • Implementing this in practice will be tricky: in particular, if a journalist is writing about me, they are incentivised to play up my importance to make their story or interview seem more interesting. But I’ll take the opportunities I can to make this explicit to the people I’m talking with. I’m going to avoid giving opening/closing talks at EAGs for the time being. I’m also going to try to provide even more support to other EA and EA-aligned public figures, and have spent a fair amount of time on that this year so far.

    • Prior to the WWOTF launch, I don’t think I’d appreciated the extent to which people saw me as “the” spokesperson, and then the magnitude of coverage around WWOTF made that issue more severe.

    • I think that this will be healthier for me, healthier for the movement, and more accurate, too. It doesn’t make sense for there to be a single spokesperson for EA, because EA is not a monolith, and there’s a huge diversity of views within the movement. If you want to read more discussion, I wrote a draft blog post, which I probably won’t publish beyond this, somewhat jokingly titled “Will MacAskill should not be the face of EA” (here), which explains some more of my thinking. [13]

There are some other changes in EA that would move in a decentralised direction that seem plausible to me, but where the case is less obvious, would need a lot more thought, and/or where the decision should be made by the head of the relevant organisation. In particular, these are often decisions that clearly need to wait for CEA’s next Executive Director. For example:

  • Rename CEA

    • The key argument here is that having an organisation called “Centre for Effective Altruism” suggests more top-down control than there is.

  • Rename the EA Forum [14]

    • At worst, the current name means that some people can (deliberately or unintentionally) claim that some post on the Forum “represents EA”.

    • But more generally, the name also suggests that the content on the Forum is more representative of EA than it really is. In reality, the Forum’s content forms a biased sample of EA thought in a whole bunch of ways: it heavily overrepresents people who are Extremely Online or who have strong views, and it also introduces randomness, since it’s pretty stochastic which topics happen to get written about at any particular time.

    • I’m also struggling to think of real benefits of having “EA” in the name. If it does get renamed, I want to make a semi-serious pitch for it to be called “MoreGood”.

  • Dissolve CEA into sub-projects

    • CEA does a lot of different things and it’s not super obvious why they should all operate within the same project.

    • Previously, EA Funds spun out from CEA, and that move has seemed pretty successful. Another more complicated example is Giving What We Can, which was separate, then merged with CEA, then separated again.

In the direction of greater centralisation, the things I find myself most excited about are projects that offer services to the wider movement (rather than trying to control the wider movement). These needn’t all be in one organisation, and there are some good reasons for thinking they could be in separate projects, or just run on the side by people. Here are some ideas:

  • A guide to what the EA movement is, answering lots of frequently-asked questions. (Analogy: guides to festivals.)

  • An organisation devoted to assessing, monitoring and reducing major risks to EA — ways in which EA could lose out on most of its value.

  • An EA leadership fast-track program, providing mentorship and opportunities to people who could plausibly enter senior positions at EA or EA-adjacent organisations in the future.

  • An EA journal or magazine that publishes an issue every three months of very high-quality content about EA or issues relevant to EA.

    • (At the moment, I feel the Forum system and blog culture incentivise large quantities of lower-quality content, rather than essays that have been worked on more intensively and iterated on over the course of months.)

  • An organisation that’s squarely and wholly focused on applied cause prioritisation research, with a particular eye to ways that EA might currently be misallocating time or money.

    • (Given the nature of EA as a project, it’s remarkable to me how little applied cause prioritisation research is done, in particular compared to how much outreach is done.)

  • An ongoing survey of the movement to gauge what other things should be on the above list.

Conclusion

This post has covered a lot of ground. I hope that, at least, the overview of how I see decision-making in EA actually working has been helpful. I’ve offered my thoughts about how decision-making in EA should evolve, but I’ll emphasise again that this issue is really tough: I’m confident I’ll have made errors, missed out important considerations, and I’m not at all confident that the upshots I’ve suggested are correct. But I think it’s at least an important conversation to have.

  1. ^

    I also want to emphasise that this post is just the product of some conversations and thinking; it’s not the output of some long research process. I’m sure that there’s a ton more that people with relevant experience, or domain experts on institutional design or evidence-based management, could add, and points on which they could correct me.

  2. ^

    This figure is approximate, from here. I looked at the “total funding 2012-2023 by known sources” chart, but subtracted out Future Fund funding, which isn’t relevant for the current state of play.

  3. ^

    A simple explanation for the discrepancy is just: People in core EA haven’t clearly explained, before, how decision-making in EA works. In the past (e.g. prior to 2020), EA was small enough that everyone could pick this sort of thing up just through organic in-person interaction. But then EA grew a lot over 2020-2021, and the COVID-19 pandemic meant that there was a lot less in-person interaction to help information flow. So the people who arrived during this time, and during 2022, have had to guess at how things operate; in doing so, it’s natural to think of EA as being more akin to a company than it is, or at least to expect more overarching strategic planning than there is. If this is right, then, happily, repeated online communication might help address this.


    A second, more complex and philosophical, explanation, which has at least some relevance to some aspects of the puzzle, needs us to distinguish between different senses of responsibility:

    1. Formal responsibility: You’re formally responsible for X if you’ve signed up to X.
    2. Interaction responsibility: You’re interaction-responsible for X if you’ve interacted with X in some way.
    3. Negative responsibility: You’re negatively responsible for X if you could alter X with your actions.

    To illustrate: You’re formally responsible for saving a child drowning in a shallow pond if you’re a lifeguard at the pond, or if you’ve waded in and said “I’ve got it covered”. You’re interaction-responsible for the child if you waded in and tried to start helping the child. You’re negatively responsible for the child simply if you could help the child in some way — for example, if you could wade in and make things better — even if a lifeguard is looking on, and even if others have already waded in and tried to help.

    (There are other generators of responsibility, too. There’s what we could call moral responsibility, for example if you deliberately pushed the child into the pond. Or causal responsibility, for example if you accidentally knocked the child into the pond. These are important, but not as relevant for the main issue I’m identifying.)

    I think that many EAs, especially core EAs, are likely to take both formal and negative responsibility unusually seriously. EAs tend to be very scrupulous about promises, which means they take formal responsibility particularly seriously. They also don’t place much weight on the acts/omissions distinction, which means they take negative responsibility particularly seriously.

    This alone squeezes out “interaction” responsibility: if you place more weight on formal and negative responsibility, that means you have to place less weight on interaction-responsibility. But I think many EAs are also less likely to see interaction-responsibility as generating special obligations in and of itself, in the way that many in the wider world do. This is discussed at length in a couple of insightful and important posts, The Copenhagen Interpretation of Ethics by Jai and Asymmetry of Justice by Zvi Mowshowitz.


    A final hypothesis concerns a notion of responsibility that’s in between formal and interaction responsibility, let’s call it blocking-responsibility. You’re blocking-responsible for X if, in virtue of trying to help with X, you’ve prevented or made it much harder for anyone else to help with X, and other people would be helping with X if you weren’t trying to help with X.

    For example, if you wade in and help the child, but in doing so prevent other people from helping the child, and other people would help the child if you didn’t, that generates something much more like formal responsibility than interaction-responsibility.

    It’s plausible to me that, often, onlookers perceive some organisation or person as signing up to “own” an issue (formal responsibility) or preventing others from helping on that issue (blocking-responsibility), when the organisation or person just sees themselves as trying to help, where the alternative is that no one helps (so they think they are interaction-responsible but not blocking-responsible).


    On either of the last two hypotheses, we end up with a dynamic where:
    1. Person Y helps with X, does an ok job.
    2. Onlooker is critical and annoyed, like “Why aren’t you doing X better in such-and-such a way?”
    3. Person Y is like, “Man, I’m just trying to do my best here; you’re giving me responsibilities that I never signed up for. The alternative is that no one does anything on X, and these criticisms are making that alternative more likely.”

    Onlooker feels either like they are trying to help, or that they are simply holding accountable people who’ve adopted positions of power. Person Y feels like not only have they taken on a cost in trying to help with X, but now they’re getting criticised for it, too!

    That’s all been pretty abstract, and I’ve been staying abstract because any particular instance will throw up a lot of additional issues. But I feel this dynamic comes up all the time, especially for things around “running the community”, and it doesn’t get called out because Person Y doesn’t want to appear defensive.

    I’m really worried about this dynamic: if we don’t address it, it means that Onlooker is unhappy because they feel like people in power aren’t doing a good enough job and they aren’t being listened to; it means that Person Y feels like they are having to pay the tax of dealing with criticism just for trying to help, and it makes them less likely to want to help at all. The article I linked to on do-ocracy has some nice examples of this dynamic, suggesting that this is a widespread phenomenon.

  4. ^

    I added “culture” late on in drafting this post. But the more I reflect on this, the bigger a deal I think it is. Burning Man is centralised in the sense that there’s a single organisation that runs it, but the culture it tries to cultivate at least aspires to be semi-anarchist. In EA, we see both decentralised and centralised cultural elements. It’s a decentralised culture insofar as, relative to many other cultures, it prizes independence of thought, and is open to contrarianism. It’s centralised insofar as people are often highly scrupulous, and can feel like they’re being a “bad EA” in some way if they aren’t acting in line with the wider group, and will be negatively judged. I think the highly critical culture, especially online, contributes to pressures towards conformity as a side-effect; people worry that if they say or do something different, they’ll get attacked. Personally, at least, I think that this latter aspect is one of the threads within EA culture I’d most like to see change.

  5. ^

    We can make “decision-making power” more precise by breaking it down into three sub-types. You can take an action because someone else has told you to do it for a number of different reasons, including:

    Authority: When you do X because Y has told you to do X and because there’s some power relationship between you and Y (e.g. boss and employee) such that Y could and would inflict bad consequences on you (e.g. docked pay) if you don’t do X.
    Deference: When you do X because Y thinks you should do X, and you trust their judgement. You might not know or understand Y’s reasons behind wanting X to happen.
    Persuasion: When you do X because Y thinks you should do X, and convinces you with compelling reasons why doing X is a good idea.


    I think that EA, in practice, is fairly decentralised if we’re looking at Authority (it’s very rare that I see someone giving orders and others following those orders without at least broadly understanding and (at least to some extent) endorsing the reasons behind them), and in the middle on Deference and Persuasion (I think it’s fairly common for people to work on specific areas because they think that better-informed people than them think it’s important, even if they don’t wholly understand the reasons). In general, I would like more of a move towards Persuasion over Deference, but that move is not trivial: there are major benefits from division of intellectual labour, and a significant amount of intellectual division of labour is inevitable.

  6. ^

    Someone on the Forum made this point earlier in the year. I forget who, but thank you!

  7. ^

    This argument for free markets comes originally from The Use of Knowledge in Society by Friedrich Hayek (more here). I don’t know what the best source to learn about this is; a quick google suggests that this is helpful; GPT-4 also gives a reasonable overview.

  8. ^

    For more discussion of the EA marketplace analogy, see Michael Plant’s essay here, and comments.

  9. ^

    This was a significant issue in the earlier days of EA. See for example, this discussion of Intentional Insights.

  10. ^

    When I was getting to grips with climate economics, it was striking to me how long the reliance on integrated assessment models had persisted, despite how inadequate they seemed to be. One explanation I heard was founder effects: Bill Nordhaus was the first serious economist to produce seminal work on climate change, and pioneered integrated assessment models. That resulted in a sort of intellectual lock-in.

  11. ^

    Of course, EA is defined by a particular mindset, set of interests, and moral and methodological views, so it can’t be open to any set of beliefs. (Trivially: if you want to maximise suffering, you don’t have a place in EA.) It’s a hard question what we should lock in as definitional of EA, and what we shouldn’t. I presented my earlier attempt at this in my article on the definition of effective altruism (which received significant help in particular from Julia Wise and Rob Bensinger) and in CEA’s guiding principles, which I helped with.

  12. ^

    For more on what constitutes a field, here’s an edited take from GPT-4, which I think is pretty good: “A “field” can be defined as a specific area of knowledge or expertise that is studied or worked in. It’s an area that has its own set of concepts, practices, and methodologies, and often has its own community of scholars or practitioners who contribute to its development.


    Fields are often characterised by their methods, by a body of knowledge within them, by a community of scholars or practitioners who contribute to the field, by institutions and organisations that support that community, and by a set of goals and values.”

    This thought seems continuous with how CEA’s comms team is thinking about things.

  13. ^

    In footnote 5 I distinguish between different sorts of decision-making influence. What I’m aiming for is to reduce the amount of Authority I have, and to discourage Deference.

  14. ^

    Some people who gave comments thought that this name is actually a way in which EA is decentralised—because anyone can comment and influence how EA is perceived. But it seems to me like it at least increases the extent to which third parties see EA as A Single Thing. By analogy, if either Leiter Reports or Daily Nous (the two main philosophy blogs) were called “The Analytic Philosophy Forum”, that would seem like a move in the direction of centralisation to me, at least on the Perception dimension. But perhaps this is just a case where it’s not clear what “centralised” vs “decentralised” means.