Decision-making and decentralisation in EA
This post is a slightly belated contribution to the Strategy Fortnight. It represents my personal takes only; I’m not speaking on behalf of any organisation I’m involved with. For some context on how I’m now thinking about talking in public, I’ve made a shortform post here. Thanks to the many people who provided comments on a draft of this post.
Intro and Overview
How does decision-making in EA work? How should it work? In particular: to what extent is decision-making in EA centralised, and to what extent should it be centralised?
These are the questions I’m going to address in this post. In what follows, I’ll use “EA” to refer to the actual set of people, practices and institutions in the EA movement, rather than EA as an idea.
My broad view is that EA as a whole is currently in the worst of both worlds with respect to centralisation. We get the downsides of appearing (to some) like a single entity without the benefits of tight coordination and clear decision-making structures that centralised entities have.
It’s hard to know whether the right response to this is to become more centralised or less. In this post, I’m mainly hoping just to start a discussion of this issue, as it’s one that impacts a wide range of decisions in EA. [1] At a high level, though, I currently think that the balance of considerations tends to push in favour of decentralisation relative to where we are now.
But centralisation isn’t a single spectrum, and we can break it down into sub-components. I’ll talk about this in more depth later in the post, but here are some ways in which I think EA should become more decentralised:
Perception: At the very least, wider perception should reflect reality on how (de)centralised EA is. That means:
Core organisations and people should communicate clearly (and repeatedly) about their roles and what they do and do not take ownership for. (I agree with Joey Savoie’s post, which he wrote independently of this one.)
We should, insofar as we can, cultivate a diversity of EA-associated public figures.
[Maybe] The EA Forum could be renamed. (Note that many decisions relating to CEA will wait until it has a new executive director).
[Maybe] CEA could be renamed. (This is suggested by Kaleem here.)
Funding: It’s hard to fix, but it would be great to have a greater diversity of funding sources. That means:
Recruiting more large donors.
A significant donor (or donors) starting a regranting program.
More people pursuing earning to give, or donating more (though I expect this “diversity of funding” consideration to have already been baked into most people’s decision-making on this). Luke Freeman has a moving essay about the continued need for funding here.
Decision-making:
Some projects that are currently housed within EV could spin out and become their own legal entities. The various different projects within EV have each been thinking through whether it makes sense for them to spin out. I expect around half of the projects will ultimately spin out over the coming year or two, which seems positive from my perspective.
[Maybe] CEA could partly dissolve into sub-projects.
Culture:
We could try to go further to emphasise that there are many conclusions that one could come to on the grounds of EA values and principles, and celebrate cases where people pursue heterodox paths (as long as their actions are clearly non-harmful).
Here are some ways in which I think EA could, ideally, become more centralised (though these ideas crucially depend on someone taking them on and making them happen):
Information flow:
Someone could create a guide to what EA is, in practice: all the different projects, and the roles they fill, and how they relate to one another.
Someone could create something like an intra-EA magazine, providing the latest updates and featuring interviews with core EAs.
Someone could take on a project of consolidating the best EA content and ideas, for example into a quarterly journal.
Provision of other services that benefit the EA ecosystem as a whole:
Someone could set up an organisation or a team that’s explicitly taking on the task of assessing, monitoring and mitigating ways in which EA faces major risks, and could thereby fail to provide value to the world, or even cause harm.
Someone could set up a leadership fast-track program.
And here are a couple of ways in which things are already highly decentralised, and in my view shouldn’t change:
Ownership:
No-one owns “EA” as a brand, or its core ideas.
Group membership:
Anyone can call themselves a part of the EA movement.
Thinking through the issue of decentralisation has also led me to plan to make some changes to how I operate in a decentralised direction:
Decision-making:
I plan to step down from the board of Effective Ventures UK once we have more capacity.
Perception:
I plan to go further to distance myself from the idea that I’m “the face” of EA, or a spokesperson for all of EA. (This hasn’t been how I’ve ever seen myself, but is how I’m sometimes perceived.)
In a “being-helpful-where-I-can” way (rather than a “taking-ownership-for-this-thing” way), I’m also spending some time trying to bring in new donors, and helping support other potential public figures. I’m not doing anything, for now, in the direction of further centralisation.
A final caveat I’ll make on all the above is that this is how I see things for now. The question of centralisation is super hard, and what makes sense will change depending on the circumstances of the time. Early EA (prior to ~2015) was notably less centralised than it was after that point, and I think that at that time increased centralisation was a good thing. In the future, I’m sure there’ll be further changes that will make sense, too, in both decentralised and centralised directions.
The rest of this post is structured as follows:
First, I give an overview of how decision-making currently works in EA, as it seems to me.
Then, I weigh the broad case for and against further centralisation.
Finally, I get into specifics of things that could or should change.
How decision-making works in EA
A number of people have commented on the Forum that they don’t feel they understand how decision-making works in EA, and I’ve sometimes seen misconceptions floating around; the confusion is often about how centralised EA is.
So I’m going to try to clarify things a bit. It’s tough to describe the situation exactly, because the reality is a middle ground between a highly centralised decision-making entity like a company and complete anarchy. And where exactly EA lies between those two extremes often depends on what exactly we’re talking about.
Anyway, here goes. Some ways in which the EA movement is centralised:
A single funder (Open Philanthropy, “OP”) allocates the large majority (around 70%[2]) of funding that goes to EA movement-building. If you want to do an EA movement-building project with a large budget ($1m/yr or more), you probably need funding from OP, for the time being at least. Vaidehi Agarwalla’s outstandingly helpful recent post gives more information.
Effective Ventures US and UK (“EV”) currently house the majority of EA movement-building work.
The senior figures in EA are in fairly regular communication with each other (though there’s probably less UK<>US communication than there should be).
It’s not totally determinate who is a “senior figure”, and it varies over time, but the current list of people would at least include: Nick Beckstead, Alexander Berger, Max Dalton, Holden Karnofsky, Howie Lempel, Brenton Mayer, Tasha McCauley, Toby Ord, Lincoln Quirk, Nicole Ross, Eli Rose, Zach Robinson, James Snowden, Ben Todd, Ben West, Claire Zabel, and me. All of these people have had or currently have positions at OP or senior positions at EV.
There’s usually an annual meeting of around 30 senior or core people, the Coordination Forum (formerly called the “Leaders’ Forum”), which CEA runs largely as an unconference. This year, there hasn’t been an equivalent so far, but there will probably be one later in the year.
Normally, before someone embarks on a major project, they get feedback from a wide variety of people on the project, and there’s a culture of not taking “unilateralist” action if most other people think that the project is harmful, even if it seems good to the person considering it. (Ideally, in a binary choice and given a number of assumptions, one pursues the action only if the median estimate of its expected value, among the people assessing it, is positive. It’s debatable to what extent this rule is followed in practice in EA, or to what extent the simple models in that paper are good guides to reality.)
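As a toy illustration of why the median-estimate rule matters (purely a sketch, with made-up numbers and a hypothetical `simulate` helper): suppose a project’s true value is exactly neutral, but each assessor observes it with independent noise. If anyone with a positive estimate can act unilaterally, the project goes ahead far too often; acting only when the median estimate is positive roughly corrects for this.

```python
import random
import statistics

# Toy model of the "unilateralist's curse". The true value of the project
# is 0 (neutral); each of n_assessors observes it with Gaussian noise.
# All numbers here are illustrative assumptions, not empirical estimates.
def simulate(true_value=0.0, n_assessors=5, noise=1.0, trials=10_000, seed=0):
    rng = random.Random(seed)
    unilateral = median_rule = 0
    for _ in range(trials):
        estimates = [true_value + rng.gauss(0, noise) for _ in range(n_assessors)]
        if max(estimates) > 0:                # any one optimist acts alone
            unilateral += 1
        if statistics.median(estimates) > 0:  # act only on the median view
            median_rule += 1
    return unilateral / trials, median_rule / trials

uni, med = simulate()
print(f"project proceeds under unilateralism: {uni:.0%}; under median rule: {med:.0%}")
```

With five assessors and symmetric noise, the unilateral regime greenlights this neutral project roughly 97% of the time (the chance that at least one of five noisy estimates is positive), while the median rule greenlights it about half the time, as it should for a genuinely borderline project.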
Some ways in which EA is decentralised:
There’s no person, and no organisation, that conceives of itself as taking ownership of EA, or as being responsible for EA as a whole.
CEA doesn’t see itself in this way. For example, here it says, “We do not think of ourselves as having or wanting control over the EA community. We believe that a wide range of ideas and approaches are consistent with the core principles underpinning EA, and encourage others to identify and experiment with filling gaps left by our work.”
EV doesn’t see itself in this way, and it includes projects that don’t consider themselves to be part of the EA movement or engaged in EA movement-building (such as the Centre for the Governance of AI, Longview Philanthropy, and Wytham Abbey).
The partial exception to this is CEA’s community health team, on issues of misconduct in the community, though even there they are well aware of the limited amount of control they have.
There is no trademark on “effective altruism.” Anyone can start a project that has “effective altruism” in the name.
There’s no requirement for EA organisations to be affiliated with Effective Ventures, and many aren’t, such as Rethink Priorities, the Global Challenges Project and some country-level organisations such as Effective Altruism UK.
There are a number of distinct core EA projects (CEA, 80,000 Hours, Giving What We Can, Rethink Priorities, Global Challenges Project, etc.) that make independent strategic plans.
There’s no CEO or “leadership team” of EA. There aren’t any formal roles that would be equivalent to C-level executives at a company. It’s vague who counts as a “senior EA”.
Across Effective Ventures US and UK, for example, decision-making is in practice currently shared between two boards, two CEOs, and the CEO or executive director of every project within the legal entities (e.g. CEA, 80,000 Hours, Giving What We Can, EA Funds, Centre for the Governance of AI, etc). These project leads develop their projects’ annual plans and strategy, including making many of the decisions most important to the movement as a whole (e.g. how to do marketing, and which target audience to have).
There are a number of donors who are major in absolute terms, as well as a diversity of funding opportunities from places like EA Funds and the Survival and Flourishing Fund. These funders are generally very keen to fund things that they think OP is overlooking.
Generally, I find there’s a very positive attitude among senior EAs towards competition within the EA ecosystem.
The Global Challenges Project is illustrative. Emma Abele and James Aung thought that CEA was doing a suboptimal job with (some) student groups. So they set up their own project, got funding from Open Philanthropy, and did a great job.
Similarly, Probably Good was set up as being (in some ways) a competitor to 80,000 Hours, because the founders thought that 80,000 Hours was lacking in some important ways; it has received support from Open Philanthropy and encouragement from 80,000 Hours.
In general, coordination is pretty organic and informal, and happens in one of two ways:
People or organisations come up with plans, proactively get feedback on their plans, get told the ways in which their plans are good or bad, and they revise them.
Someone (or some people) have an idea that they think should exist in the world, and then shop it around to see if someone wants to take it on.
Overall, the best analogy I can think of is that EA functions as a “do-ocracy”. Here is a short article on do-ocracy, which is well worth reading. A slogan to define do-ocracy, which I like, is: “If you want something done, do it, but remember to be excellent to each other when doing so.” (Where, within EA, the ‘be excellent’ caveat covers non-unilateralism and taking externalities across the movement seriously.) I think this both represents how EA actually works, and how most senior EAs understand it as working.
I think the main way EA departs from being a do-ocracy is that many people might not perceive it that way (very naturally, because it hasn’t yet been publicly defined that way), and there’s a culture where people sometimes feel afraid of unilateralism, even in cases where that fear doesn’t make sense. If that’s true, it means that some people don’t do things because they feel they aren’t “allowed” to, or because they assume that someone else has responsibility, or has it all figured out.
Compared to a highly-centralised entity like a company, the semi-decentralised / do-ocracy nature of EA has a few important upshots. This is the part of the post I feel most nervous about writing, because I’m worried that others will interpret this as me (and other “EA leaders”) disavowing responsibility; I’m already anxiously visualising criticism on this basis. But it seems both important and true to me, so I still want to convey it. The upshots are:
If something bad happens, it’s natural to look for who is formally responsible for the problem. (And, in a company, there’s always someone who is ultimately formally responsible: responsibility bottoms out with the CEO.) But, often, the answer is that there’s no one who was formally responsible, and no one who was formally responsible for making sure that someone was formally responsible.
It’s difficult for calls along the lines of, “Something should be done about X”, or “EA should do Y” to have traction, unless the call to action is targeted at some particular person or project, because there’s no one who’s ultimately in charge of EA, and who is responsible for generally making the whole thing go well. (See Lizka Vaintrob’s excellent post on this here).
The reason for something happening or not happening is often less deep than one might expect, boiling down to “someone tried to make it happen” or “no one tried to make it happen”, rather than “this was the result of some carefully considered overarching strategy”. Moreover, the list of things it would be good to do is very long, and the bottleneck is normally there being someone with the desire, ability and spare capacity to take it on.
Thoughts of “I’m sure this is the way it is because some more well-informed people have figured it out” are often incorrect, especially about things that aren’t happening.
I get the sense that the above points mark a major difference in how many people who work for core EA orgs see decision-making in EA working, and how it’s perceived by some in the wider community. I have some speculative hypotheses about why there’s this discrepancy, but it’s a big digression so I’ve put it into a footnote. [3]
When thinking about how centralised or not EA is, or should be, it can be helpful to have in mind concrete potential analogies, and the strengths and weaknesses they have. Here’s a spectrum of organisations, in descending order from more to less centralised (as it seems to me):
communist dictatorships (e.g. North Korea)
the US army
most companies (e.g. Apple)
highly centralised religious groups (e.g. Mormonism)
franchises (e.g. McDonald’s)
the Scouts
mixed economies (the US, UK)
registered clubs and sports groups (e.g. The United States Golf Association; USA Basketball)
intergovernmental decision-making
fairly decentralised religious groups (e.g. Protestantism, Buddhism)
most social movements (e.g. British Abolitionism, the American Civil Rights Movement)
the scientific community
most intellectual movements (e.g. behaviourism)
the US startup scene
This is highly subjective, but it seems to me the overall level of centralisation within EA is currently similar to that of fairly decentralised religious groups and many social movements.
It can also be helpful to break down “centralisation” into sub-dimensions, such as:
Decision-making power: To what extent is what the group as a whole does determined by a small group of decision-makers?
Are these decision-making structures formal or informal?
Do these decision-makers have control over resources, including financial resources?
Who is accountable for success or failure? Are these accountability mechanisms formal or informal?
Ownership: Is there legal ownership of constitutive aspects of the group (e.g. intellectual property, branding)?
Group membership: How strongly can the group determine its own membership? How hard is it for someone in the group to leave, or for someone outside the group to enter? And how tightly defined is group membership?
Are there formal mechanisms for doing this, or merely informal?
Information flow: To what extent does information flow merely from decision-makers down to other group members, and to what extent does it flow back up to decision-makers, or horizontally from one non-decision-maker to another?
Culture: Do people within the group feel empowered to think and act autonomously, or do they feel they ought to defer to the views of high-status individuals within the group, or to the majority view within the group? [4]
On these dimensions, it seems to me that EA is currently fairly decentralised on group membership and information flow, very decentralised on ownership, and in the middle on decision-making power [5] and culture.
Should EA be more or less centralised?
At the moment, it seems to me we’re in the worst of both worlds, where many people think that EA is highly centralised, whereas really it’s in-between. We get the downsides of appearing (to some) like one entity without the benefits of tight coordination. For many issues, there’s a risk that people generally feel that the “central” groups and people will be in charge of all issues impacting EA and so there’s no need to do anything about any gaps they perceive, even when that’s not the case.
I’ll talk more about specific ways EA could centralise or decentralise in the next section. If we were going broadly in the direction of further centralisation, then, for example, CEA could explicitly consider itself as governing the community, and explicitly take on more roles. Going further in that direction, there could even be a membership system for being part of EA, like the Scouts has. If we were going broadly in the direction of further decentralisation, then CEA could change its name and perhaps separate into several distinct projects, some more projects could spin out of Effective Ventures, and we could all more loudly communicate that EA is a decentralised movement and cultivate a decentralised culture.
I’ll give the broad case both for and against further centralisation or decentralisation, and then get into specifics.
The broad case for further centralisation includes (in no particular order):
There are some issues or activities that concern the community as a whole, or where there are major positive / negative externalities, or natural monopolies. These include:
The handling of bad actors within EA, who can cause harm to the whole of the movement.
Infohazards (e.g. around bio x-risk).
Issues that impact on EA’s brand. For example, whether to associate with a very public new donor, or whether to run a public EA campaign.
Given the ubiquity of fat-tailed distributions, semi-centralisation is almost inevitable. Wealth is heavily fat-tailed, so it’s very likely that one or a small number of funders end up accounting for most funding. [6] Similarly, fame (measured by things like number of social media followers, media mentions, or books sold) also seems to be fat-tailed, so it’s likely that one or a small number of people will end up accounting for most of the attention that goes towards specific people. We can try to combat this, but we’ll be fighting against strong forces in the other direction.
The nonprofit world is very unlike a marketplace. Crucially, there isn’t a price mechanism which can aggregate decentralised information and indicate how the provision of goods and services should be prioritised and thereby incentivise the production of goods and services that are most needed. [7] So common arguments within economics that, under some conditions, favour something like market competition, don’t cleanly port over. [8]
Centralisation can enable greater control over the movement in potentially-desirable ways. (Somewhat analogously, governments can help control an economy by printing money, setting interest rates, and so on.)
For example, as movements grow, there’s a risk that their ideals become diluted over time, regressing to the mean of wider society’s views. Centralisation can be a way of preventing or slowing that tendency; perhaps the ideal growth rate for EA is faster or slower than the “organic” growth rate.
In the absence of coordination, some projects might get started, or continue, for “unilateralist’s curse” type reasons: naturally, there will be a range of assessments of how good a potential or existing project is, and in the absence of coordination (or at least information-sharing), those who think the project is best will go ahead with it, even if it’s overall a bad idea.
Centralisation can help enforce quality control, preventing low-integrity or low-quality projects from damaging the wider public’s perception of EA. [9]
Decentralisation risks redundancy, with multiple people working on very similar projects. Centralisation gets benefits from economies of scale: there are certain things you only need to do or figure out once (e.g. setting up a legal entity; having accounting, legal, and HR departments; and so on).
No matter how the EA movement is structured, onlookers will often treat it as a single entity, interpreting actions from any one person or organisation as representative of the whole.
It seems harder for a decentralised movement to centralise than it is for a centralised movement to decentralise. So, trying to be as centralised as possible at the moment preserves option value.
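To make the fat-tails point above concrete, here’s a toy simulation (the distribution, parameter values, and `top_donor_share` helper are all illustrative assumptions, not real funding data): when donor wealth follows a heavy-tailed distribution, the single largest donor tends to account for a large share of total funding, so heavy funding concentration is the default outcome rather than an anomaly.

```python
import random

# Toy model: draw donor wealth from a Pareto distribution (heavy-tailed;
# alpha closer to 1 means a heavier tail) and measure what fraction of
# total funding the single largest donor holds, averaged over many draws.
# All parameters are made up for illustration.
def top_donor_share(n_donors=100, alpha=1.1, trials=1000, seed=0):
    rng = random.Random(seed)
    shares = []
    for _ in range(trials):
        wealth = [rng.paretovariate(alpha) for _ in range(n_donors)]
        shares.append(max(wealth) / sum(wealth))
    return sum(shares) / trials

print(f"average share of total funding held by the largest of 100 donors: "
      f"{top_donor_share():.0%}")
```

For comparison, if wealth were distributed thin-tailed (e.g. exponentially), the largest of 100 donors would typically hold only a few percent of the total; with a heavy tail, one donor routinely dominates, which is the sense in which semi-centralisation of funding is hard to avoid.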
The broad case for further decentralisation includes (in no particular order):
People in EA are doing a wide variety of things, and it’s hard for one organisation to speak to and satisfy all the different sub-cultures within EA at once. There are very different needs and interests from, for example, student activists, academics, people working in national security, old-time rationalists, major philanthropists, etc, and among people working in different cause areas.
Relatedly, decentralised decision-making benefits from local knowledge. The way EA should be thought about or communicated across causes and countries will be very different; decisions about how EA should be adapted to those contexts are probably best done by people with the most knowledge about those contexts.
Even if the nonprofit world is significantly unlike a for-profit marketplace, there are still good arguments for thinking that competition can be highly beneficial, resulting in better organisations and products. This is because (i) competition means that people can choose the better service, and (ii) competition incentivises better service provision among the competitors. In contrast, centrally-planned groups are often slow-moving, bureaucratic, and ineffective.
Any centralised entity would be very unlike a government. It couldn’t forcibly tax its members, or enforce its policies through its own legal system. So common arguments within economics and political science that, under some conditions, favour something like government action, don’t cleanly port over.
Most activities within EA don’t concern the community as a whole, or have major positive / negative externalities, or natural monopolies.
Centralisation can be less empowering. Suppose that there’s some activity X that would be well worth doing, and benefit all of EA, but the central entities haven’t done it (for bad reasons). Then, if the widespread understanding is that the movement is centralised, X just won’t happen: other parties will believe that the central entities have got it covered.
Centralisation is more fragile in some ways. If, for example, there was only one EA organisation, then the collapse of that one organisation would mean the collapse of EA as a whole.
There’s a risk that EA ossifies in thought, becoming locked-in to a certain set of founding beliefs or focuses. In particular, if there’s a set of early highly influential thinkers, and the views of those early thinkers become the default such that it’s much harder for the movement as a whole to reason away from those views, then, in the likely event that those early thinkers are mistaken in some important ways, that would be very bad. This risk could be especially likely if people who aren’t sympathetic to those particular beliefs are more likely to bounce off the movement, so the movement becomes disproportionately populated with people sympathetic to those beliefs. Centralisation might increase this risk.
This seems to happen in science. Max Planck famously quipped that science advances “one funeral at a time” and some recent evidence (which I haven’t vetted) suggests that’s correct.[10]
And it often seems to happen in other social and intellectual movements, too.[11]
The tractability of further centralisation seems low. This is for a few reasons:
If there’s some central grand plan for how EA should be, and some people disagree with that plan, there’s not much in the way of enforcement that a central body can do. At the moment, people can’t get fired or kicked out of EA: they can be disinvited from EA events, denied funding by funders who agree, removed from the EA Forum, and information about their being a bad actor can be percolated; but that’s not necessarily enough to prevent them from continuing. And these actions would seem harsh as a response to someone simply disagreeing with a strategic plan. Ultimately, if some person, organisation or group wants to do something and call it EA, they just can. This means that centralisation efforts risk being toothless.
One could try to change this, for example by having a “membership” system like many political parties and some advocacy groups (e.g. the Sierra Club or the NAACP) have. But I think that, even if that seemed desirable, trying to implement it seems extremely hard.
It’s hard to see who would lead a centralisation effort. They’d need a combination of ability, desire, and legitimacy within the movement, and it would also have to be the case that there’s nothing more important for them to work on.
Of these, the biggest considerations in favour of centralisation, in my view, are option value and the handling of bad actors. The biggest considerations in favour of decentralisation are worries about ossification and lock-in, the benefits of competition, and, above all, that I think the tractability of further centralisation seems low.
As I mentioned at the outset, there’s not a single spectrum of centralisation to decentralisation, and I’ll get into specifics in the next section. Overall, I think the arguments on average broadly tend towards further decentralisation rather than centralisation. But I’m still very unsure: there are tough tradeoffs here. If centralised, you get fewer bad projects but fewer good projects, too; you get less redundancy but less innovation. So, even though I’m broadly in favour of further decentralisation, if there was, for example, a new Executive Director of CEA or someone at Open Philanthropy who really wanted to take the mantle on, and could build the legitimacy needed to pull it off, I’d be interested to see them experiment with centralisation in some areas and see how that goes.
Going back to the list of comparisons: I feel like the level of decentralisation in the scientific community or in intellectual movements is in the vein of what we should aim for. The analogy I like best, at the moment, is with specific scientific/academic communities; I know most about the analytic philosophy community. Here are some notable aspects of that community where I think the analogy is helpful. (Feel free to skip the sub-bullets if you aren’t interested in the details. I’m also not claiming that we should emulate the analytic philosophy community, just that it’s an interesting analogue in terms of level of (de)centralisation.)
Centralised bodies tend to take the form of provision of services rather than top-down control. They tend to arise because some person or group has unilaterally offered them and they’ve had widespread adoption. Often, there are different groups offering the same services.
The closest thing to a centralised body in analytic philosophy is The American Philosophical Association. What they do is limited, though, and as a philosopher you rarely interact with them or think about them; they aren’t a very powerful force within the field of analytic philosophy.
It runs what I believe are the three largest philosophy conferences. First-round interviews for US tenure-track philosophy jobs are usually held at one of these conferences.
It provides some grants, fellowships, and funds.
It provides some online resources, too, although they don’t seem very influential.
I think it used to host adverts for jobs in philosophy, but then PhilJobs did the same thing better, so they now use PhilJobs.
Some other examples of “centralised” services in philosophy:
Journals. Nowadays, their key role is to act as quality-stamps on philosophical output. The prestige of different journals is generally well-known, and publication in a particular journal is understood as a way of (i) indicating to other philosophers that this piece of work might be worth looking at; (ii) providing evidence of the quality of a philosopher’s work for hiring committees and tenure committees.
Different journals are run by different groups, traditionally by universities or publishers. More recently, Philosophers’ Imprint was founded by two philosophers who thought they could create an online and open-access journal that was better-run than existing journals, and it’s been very successful.
The Philosophical Gourmet Report ranks graduate programs in philosophy, by surveying leading philosophers on their impressions of the quality of faculty at the different departments. It’s very influential. It was originally created single-handedly by one philosopher, Brian Leiter.
It has some competitors, such as the Pluralist’s Guide to Philosophy.
The Stanford Encyclopedia of Philosophy, which functions as the go-to textbook within philosophy.
Two philosophers, David Bourget and David Chalmers, created a range of services. Philjobs is a job board for philosophy positions. PhilPapers is an index and bibliography of philosophy, and also runs a survey of philosophers’ beliefs. PhilEvents is a calendar of conferences and workshops.
Various surveys of journal rankings.
DailyNous and Leiter Reports, two blogs which aggregate news in the philosophical world.
Some fields have some limited amount of top-down control.
For example, the American Psychiatric Association defines key terms in the Diagnostic and Statistical Manual of Mental Disorders, which are widely accepted. I think it would be great if EA had some key defined terms like this. (I think this to an even greater extent with AI safety.)
The climate physics and climate economics communities have the Intergovernmental Panel on Climate Change, which attempts to represent consensus views within these fields. I don’t see an obvious plausible analogue within EA. Something similar but massively toned-down, like an encyclopedia, could be very helpful.
Change in what philosophers work on, or how they operate, generally happens organically, as a result of many individuals’ decisions about what is important or how philosophy should be done.
There is sometimes explicit commentary on how philosophy should be done or what it should focus on, but when that’s influential, it’s usually because arguments have been made by people with a long established track record of excellent work. (For example, this from John Broome or this from Timothy Williamson.)
There’s an enormous amount of internal disagreement among philosophers. Analytic philosophy is defined much more by a methodology (clear, rigorous argument), a set of defining questions (free will, the nature of morality, etc), and an intellectual tradition, than by any particular set of views.
I think this is true in other areas of science, too, although the amount of disagreement is usually lower, and sometimes we really just know things and there’s not really a way to be a good scientist on the topic while having heterodox beliefs (e.g. believing in telekinesis, or that the Earth is only 6000 years old). I think the amount of agreement that should be expected within effective altruism should be closer to that within philosophy rather than within physics (which has a much larger body of very-high-confidence knowledge).
There aren’t strict membership conditions for being a philosopher. (For example, you don’t need to be employed by a University.)
Membership criteria exist in other fields, though, like medicine. Medicine also provides a nice distinction between being a researcher and being a clinician or practitioner, which ports over to effective altruism, too.
I’m not claiming that EA should exactly mirror the analytic philosophy community. And it would be a suspicious coincidence if it were the best model! I’m using it as an example for calibration — a concrete analogy of the level of centralisation we might want. In particular, reflection on it makes vivid to me the extent to which we can have community-wide services without centralisation, as a result of individuals noticing that some service isn’t being provided and setting something up to provide it.
On this broad view, what EA should aspire to be is not a club, a social movement, an identity, or an alliance of specific causes. And it should only be a community or a professional network in a broad sense. Instead, it should aspire to be more like a field — like the fields of philosophy, or medicine, or economics. [12]
Getting more specific
Given all the above, what are some more specific upshots? Here are some tentative suggestions.
First, there are some moves in the direction of decentralisation that seem very robustly good, and many of which are happening anyway:
Perception:
Reflect reality on how centralised we are.
Inaccurate perceptions on this seem like all downside to me.
Assuming I’m right that, currently, perception doesn’t match reality, it means the core projects and people in EA should communicate more about what they are and are not taking responsibility for.
This post is trying to help with that!
But more generally, now that EA is the size it is, I suspect it means that core projects and people will need to communicate some basic things about themselves many, many times, even though it’ll feel very repetitive to them.
Encourage a broader range of EA-affiliated public figures
I’d love there to be a greater diversity of people who are well-known as EA-advocates, reflecting the intellectual, demographic and cultural diversity within the movement.
Funding:
Get more major donors.
This would be a very clear win, though it’s hard to achieve.
There are a handful of EA-aligned potential donors who might possibly become significant donors over the next few years. But there’s no one who I expect to be as major, in particular within EA movement-building, as OP.
Restart a regranters program
This would have to be done by OP or some other major donor; it would give more power over funding decisions to more people.
More people donate more or earn to give
One way this plays out is that, because OP aims to limit the amount they contribute to most organisations and in some cases has imposed limits on how much of the budget they want to support, funders donating to those organisations in effect can “reallocate” Open Phil funding towards those orgs.
Of course, increasing funding diversity is only one consideration among very many when making career decisions!
Decision-making:
Some projects should spin out from EV
Especially as projects grow in size, I think this makes sense from their perspective: it allows the projects to have greater autonomy. And it’ll have benefits across the EA movement, too.
The various projects under EV have been thinking this through, and weighing the costs and benefits. My guess is that around half will ultimately spin out over the next year or two. If this happens, it seems like a positive development to me.
Culture:
Celebrate diversity
We could try to go further to emphasise that there are many conclusions that one could come to on the grounds of EA values and principles and celebrate cases where people pursue heterodox paths, as long as their actions are clearly non-harmful. This can be tough to do, because it means praising someone for taking what, in your view, is the wrong (in the sense of suboptimal) decision.
Then there are some steps I can personally take in the direction of decentralisation and that seem like clear wins to me. I plan to:
Step down from the board of Effective Ventures UK once we have more capacity. (I’m not currently sure on timelines for that. I’ll note I’ve also been recused from all decision-making relating to EV’s response to the FTX collapse.) I’ve been in the role for 11 years, and now feels like a natural time to move on. I think there are a lot of people who could do this role well, and me stepping down gives an opportunity for someone else to step up.
I think that this will move EA in a decentralised direction on the dimension of both perception and decision-making power.
Distance myself from the idea that I’m “the” face of EA. I’ve never thought of myself this way (let alone as “the leader”) and there have always been many high-profile EA advocates. But others, especially in the media, have sometimes portrayed or viewed me in this way. Trying to correct this will hopefully be a step in the direction of decentralisation on the perception and culture dimensions.
Implementing this in practice will be tricky: in particular, if a journalist is writing about me, they are incentivised to play up my importance to make their story or interview seem more interesting. But I’ll take the opportunities I can to make it explicit to people that I’m talking with. I’m going to avoid giving opening / closing talks at EAGs for the time being. I’m also going to try to provide even more support to other EA and EA-aligned public figures, and have spent a fair amount of time on that this year so far.
Prior to the WWOTF launch, I don’t think I’d appreciated the extent to which people saw me as “the” spokesperson, and then the magnitude of coverage around WWOTF made that issue more severe.
I think that this will be healthier for me, healthier for the movement, and more accurate, too. It doesn’t make sense for there to be a single spokesperson for EA, because EA is not a monolith, and there’s a huge diversity of views within the movement. If you want to read more discussion, I wrote a draft blog post, which I probably won’t publish beyond this, somewhat jokingly titled “Will MacAskill should not be the face of EA” (here), which explains some more of my thinking. [13]
There are some other changes in EA that would move in a decentralised direction that seem plausible to me, but where the case is less obvious, would need a lot more thought, and/or where the decision should be made by the head of the relevant organisation. In particular, often the decisions are clearly something that needs to wait for CEA’s next Executive Director. For example:
Rename CEA
The key argument here is that having an organisation called “Centre for Effective Altruism” suggests more top-down control than there is.
Rename the EA Forum [14]
At worst, the current name means that some people can (deliberately or unintentionally) claim that some post on the Forum “represents EA”.
But more generally, the name also suggests that the content on the Forum is more representative of EA than it really is. Whereas really the content on the Forum will form a biased sample of thought in a whole bunch of ways: it’ll heavily overrepresent people who are Extremely Online or who have strong views, and it’ll also just introduce randomness, as it’s pretty stochastic what topics happen to get written about at any particular time.
I’m also struggling to think of real benefits for it to have EA in the name. If it does get renamed, I want to make a semi-serious pitch for it to be called “MoreGood”.
Dissolve CEA into sub-projects
CEA does a lot of different things and it’s not super obvious why they should all operate within the same project.
Previously, EA Funds spun out from CEA, and that move has seemed pretty successful. Another more complicated example is Giving What We Can, which was separate, then merged with CEA, then separated again.
In the direction of greater centralisation, the things I find myself most excited about are projects that offer services to the wider movement (rather than trying to control the wider movement). These needn’t all be in one organisation, and there are some good reasons for thinking they could be in separate projects, or just run on the side by people. Here are some ideas:
A guide to what the EA movement is, answering lots of frequently asked questions. (Analogy: guides to festivals.)
An organisation devoted to assessing, monitoring and reducing major risks to EA — ways in which EA could lose out on most of its value.
An EA leadership fast-track program, providing mentorship and opportunities to people who could plausibly enter senior positions at EA or EA-adjacent organisations in the future.
An EA journal or magazine that publishes an issue every three months of very high-quality content about EA or issues relevant to EA.
(At the moment, I feel the Forum system and blog culture incentivises large quantities of lower-quality content, rather than essays that have been worked on more intensively and iterated over the course of months).
An organisation that’s squarely and wholly focused on applied cause prioritisation research, with a particular eye to ways that EA might currently be misallocating time or money.
(Given the nature of EA as a project, it’s remarkable to me how little applied cause prioritisation research is done, in particular compared to how much outreach is done.)
An ongoing survey of the movement to gauge what other things should be on the above list.
Conclusion
This post has covered a lot of ground. I hope that, at least, the overview of how I see decision-making in EA actually working has been helpful. I’ve offered my thoughts about how decision-making in EA should evolve, but I’ll emphasise again that this issue is really tough: I’m confident I’ll have made errors, missed out important considerations, and I’m not at all confident that the upshots I’ve suggested are correct. But I think it’s an important conversation, at least, to have.
- ^
I also want to emphasise that this post is just the product of some conversations and thinking; it’s not the output of some long research process. I’m sure that there’s a ton more that people with relevant experience, or domain experts on institutional design or evidence-based management, could add, and could correct me on.
- ^
This figure is approximate, from here. I looked at the “total funding 2012-2023 by known sources” chart, but subtracted out Future Fund funding, which isn’t relevant for the current state of play.
- ^
A simple explanation for the discrepancy is just: People in core EA haven’t clearly explained, before, how decision-making in EA works. In the past (e.g. prior to 2020), EA was small enough that everyone could pick this sort of thing up just through organic in-person interaction. But then EA grew a lot over 2020-2021, and the COVID-19 pandemic meant that there was a lot less in-person interaction to help information flow. So the people arriving during this time, and during 2022, are having to guess at how things operate; in doing so, it’s natural to think of EA as being more akin to a company than it is, or at least for there to be more overarching strategic planning than there is. If this is right, then, happily, repeated online communication might help address this.
A second, more complex and philosophical, explanation, which has at least some relevance to some aspects of the puzzle, needs us to distinguish between different senses of responsibility:
1. Formal responsibility: You’re formally responsible for X if you’ve signed up to X.
2. Interaction responsibility: You’re interaction-responsible for X if you’ve interacted with X in some way.
3. Negative responsibility: You’re negatively responsible for X if you could alter X with your actions.
To illustrate: You’re formally responsible for saving a child drowning in a shallow pond if you’re a lifeguard at the pond, or if you’ve waded in and said “I’ve got it covered”. You’re interaction-responsible for the child if you waded in and tried to start helping the child. You’re negatively responsible for the child simply if you could help the child in some way — for example, if you could wade in and make things better — even if a lifeguard is looking on, and even if others have already waded in and tried to help.
(There are other generators of responsibility, too. There’s what we could call moral responsibility, for example if you deliberately pushed the child into the pond. Or causal responsibility, for example if you accidentally knocked the child into the pond. These are important, but not as relevant for the main issue I’m identifying.)
I think that many EAs, especially core EAs, are likely to take both formal and negative responsibility unusually seriously. EAs tend to be very scrupulous about promises, which means they take formal responsibility particularly seriously. They also don’t place much weight on the acts/omissions distinction, which means they take negative responsibility particularly seriously.
This alone squeezes out “interaction” responsibility: if you place more weight on formal and negative responsibility, that means you have to place less weight on interaction-responsibility. But I think many EAs are also less likely to see interaction-responsibility as generating special obligations in and of itself, in the way that many in the wider world do. This is discussed at length in a couple of insightful and important posts, The Copenhagen Interpretation of Ethics by Jai and Asymmetry of Justice by Zvi Mowshowitz.
A final hypothesis concerns a notion of responsibility that’s in between formal and interaction responsibility; let’s call it blocking-responsibility. You’re blocking-responsible for X if, in virtue of trying to help with X, you’ve prevented or made it much harder for anyone else to help with X, and other people would be helping with X if you weren’t trying to help with X.
For example, if you wade in and help the child, but in doing so prevent other people from helping the child, and other people would help the child if you didn’t, that generates something much more like formal responsibility than interaction-responsibility.
It’s plausible to me that, often, onlookers perceive some organisation or person as signing up to “own” an issue (formal responsibility) or preventing others from helping on that issue (blocking-responsibility), when the organisation or person just sees themselves as trying to help, where the alternative is that no one helps (so they think they are interaction-responsible but not blocking-responsible).
On either of the last two hypotheses, we end up with a dynamic where:
1. Person Y helps with X, does an ok job.
2. Onlooker is critical and annoyed, like “Why aren’t you doing X better in such-and-such a way?”
3. Person Y is like, “Man, I’m just trying to do my best here; you’re giving me responsibilities that I never signed up for. The alternative is that no one does anything on X, and these criticisms are making that alternative more likely.”
Onlooker feels either like they are trying to help, or that they are simply holding accountable people who’ve adopted positions of power. Person Y feels like not only have they taken on a cost in trying to help with X, but now they’re getting criticised for it, too!
That’s all been pretty abstract, and I’ve been staying abstract because any particular instance will throw up a lot of additional issues. But I feel this dynamic comes up all the time, especially for things around “running the community”, and it doesn’t get called out because Person Y doesn’t want to appear defensive.
I’m really worried about this dynamic: if we don’t address it, it means that Onlooker is unhappy because they feel like people in power aren’t doing a good enough job and they aren’t being listened to; it means that Person Y feels like they are having to pay the tax of dealing with criticism just for trying to help, and it makes them less likely to want to help at all. The article I linked to on do-ocracy has some nice examples of this dynamic, suggesting that this is a widespread phenomenon.
- ^
I added “culture” late on in drafting this post. But the more I reflect on this, the bigger a deal I think it is. Burning Man is centralised in the sense that there’s a single organisation that runs it, but the culture it tries to cultivate at least aspires to be semi-anarchist. In EA, we see both decentralised and centralised cultural elements. It’s a decentralised culture insofar as, relative to many other cultures, it prizes independence of thought, and is open to contrarianism. It’s centralised insofar as people are often highly scrupulous, and can feel like they’re being a “bad EA” in some way if they aren’t acting in line with the wider group, and will be negatively judged. I think the highly critical culture, especially online, contributes to pressures towards conformity as a side-effect; people worry that if they say or do something different, they’ll get attacked. Personally, at least, I think that this latter aspect is one of the threads within EA culture I’d most like to see change.
- ^
We can make “decision-making power” more precise by breaking it down into three sub-types. You can take action because someone else has told you to take that action for a number of different reasons, including:
Authority: When you do X because Y has told you to do X and because there’s some power relationship between you and Y (e.g. boss and employee) such that Y could and would inflict bad consequences on you (e.g. docked pay) if you don’t do X.
Deference: When you do X because Y thinks you should do X, and you trust their judgement. You might not know or understand Y’s reasons behind wanting X to happen.
Persuasion: When you do X because Y thinks you should do X, and convinces you with compelling reasons why doing X is a good idea.
I think that EA, in practice, is fairly decentralised if we’re looking at Authority (it’s very rare that I see someone giving orders and others following those orders without at least broadly understanding and (at least to some extent) endorsing the reasons behind them), and in the middle on Deference and Persuasion (I think it’s fairly common for people to work on specific areas because they think that better-informed people than them think it’s important, even if they don’t wholly understand the reasons). In general, I would like more of a move towards Persuasion over Deference, but that move is not trivial: there are major benefits from division of intellectual labour, and a significant amount of intellectual division of labour is inevitable.
- ^
Someone on the Forum made this point earlier in the year. I forget who, but thank you!
- ^
This argument for free markets comes originally from The Use of Knowledge in Society by Friedrich Hayek (more here). I don’t know what the best source to learn about this is; a quick google suggests that this is helpful; GPT-4 also gives a reasonable overview.
- ^
For more discussion of the EA marketplace analogy, see Michael Plant’s essay here, and comments.
- ^
This was a significant issue in the earlier days of EA. See for example, this discussion of Intentional Insights.
- ^
When I was getting to grips with climate economics, it was striking to me how long the reliance on integrated assessment models had persisted, despite how inadequate they seemed to be. One explanation I heard was founder effects: Bill Nordhaus was the first serious economist to produce seminal work on climate change, and pioneered integrated assessment models. That resulted in a sort of intellectual lock-in.
- ^
Of course, EA is defined by a particular mindset, set of interests, and moral and methodological views, so it can’t be open to any set of beliefs. (Trivially: if you want to maximise suffering, you don’t have a place in EA.) It’s a hard question what we should lock in as definitional of EA, and what we shouldn’t. I presented my earlier attempt at this in my article on the definition of effective altruism (which received significant help in particular from Julia Wise and Rob Bensinger) and in CEA’s guiding principles, which I helped with.
- ^
For more on what constitutes a field, here’s an edited take from GPT-4, which I think is pretty good: “A “field” can be defined as a specific area of knowledge or expertise that is studied or worked in. It’s an area that has its own set of concepts, practices, and methodologies, and often has its own community of scholars or practitioners who contribute to its development.
Fields are often characterised by their methods, by a body of knowledge within them, by a community of scholars or practitioners who contribute to the field, by institutions and organisations that support that community, and by a set of goals and values.”
This thought seems continuous with how CEA’s comms team is thinking about things.
- ^
In footnote 5 I distinguish between different sorts of decision-making influence. What I’m aiming towards is reducing the amount of Authority I have, and discouraging Deference.
- ^
Some people who gave comments thought that this name is actually a way in which EA is decentralised—because anyone can comment and influence how EA is perceived. But it seems to me like it at least increases the extent to which third parties see EA as A Single Thing. In analogy, if either of Leiter Reports or Daily Nous (the two main philosophy blogs) were called “The Analytic Philosophy Forum”, that would seem like a move in the direction of centralisation to me, at least on the Perception dimension. But perhaps this is just a case where it’s not clear what “centralised” vs “decentralised” means.
Thank you! This post says very well a lot of things I had been thinking and feeling in the last year but not able to articulate properly.
I think it’s very right to say that EA is a “do-ocracy”, and I want to focus in on that a bit. You talked about whether EA should become more or less centralized, but I think it’s also interesting to ask “Should EA be a do-ocracy?”
My response is a resounding yes: this aspect of EA feels (to me) deeply linked to an underrated part of the EA spirit. Namely, that the EA community is a community of people who not only identify problems in the world, but take personal action to remedy them.
I love that we have a community where random community members who feel like an idea is neglected feel empowered to just do the research and write it up.
I love that we have a community where even those who do not devote much of their time to action take the very powerful action of giving effectively and significantly.
I love that we have a community where we fund lots of small experimental projects that people just thought should exist.
I love that most of our “big” orgs started with a couple of people in a basement because they thought it was a good idea.
Honoring the taking of action and supporting people who take action is really great and I hope it remains a core part of EA culture indefinitely.
I always feel a bit sad when I see “EA should …” posts. I want to say: “Maybe you should do it! Look at the fire within you that made you write that long critique; could you nurture that into a fire that could actually make it happen?”. The idea that you might just write an angry critique and hope that some mandarin with centralized power will pick it up is very sad to me and antithetical to (my conception of) the EA spirit.
I think this is related to an important feature of a do-ocracy which is that you don’t have any voice if you don’t do. You can persuade, but nobody has any obligation whatsoever to listen to you. It’s not a democracy (and that’s good). I think this confuses people a lot.
I definitely think there’s a “generational” thing here. For those of us who’ve been around long enough to see how everything came from nothing but people doing things they thought needed to be done, it’s perfectly obvious. But I can very much see how if you join the community today it looks like there are these serious, important organizations who are In Charge. But I do think it’s still not really true.
+1.
I was slow to realise that, over the period of just a few years of growth, this bunch of uncertain, scrappy, loosely coordinated students had come to be seen as a powerful established authority and treated accordingly. I think many others have been rather slow to notice this too and that that’s been a big source of confusion and tension as of late.
Thanks for this comment, it’s very inspiring!
One thought I had is that do-ocracy (as opposed to “someone will have got this covered, right?”) describes other areas, as well as EA. On the recent 80k podcast, Lennart Heim describes a similar dynamic within AI governance:
“at some point, I would discover that compute seems really important as an input to these AI systems — so maybe just understanding this seems useful for understanding the development of AI. And I really saw nobody working on this. So I was like, “I guess I must be wrong if nobody’s working on this. All these smart people, they’re on the ball, they got it,” right? But no, they’re not. If you don’t see something covered, my cold take is like, cool, maybe it’s actually not that impactful, maybe it’s not a good idea. But whatever: try to push it, get feedback, put it out there, talk to people and see if this is a useful thing to do.
You should, in general, expect there are more unsolved problems than solved problems, particularly in such a young field, and where we just need so many people to work on this. So yeah, if you have some ideas of how your niche can contribute, or certain things where you don’t think it’s impactful just because we haven’t covered it yet, that does not mean it’s not a good thing to go for. I encourage you to try it and put it out there.”
(The conversation continues in a helpful way beyond that point.)
Leopold Aschenbrenner points to a somewhat similar dynamic on the technical side in Nobody’s on the ball on AGI alignment:
“Observing from afar, it’s easy to think there’s an abundance of people working on AGI safety. Everyone on your timeline is fretting about AI risk, and it seems like there is a well-funded EA-industrial-complex that has elevated this to their main issue. Maybe you’ve even developed a slight distaste for it all—it reminds you a bit too much of the woke and FDA bureaucrats, and Eliezer seems pretty crazy to you.
That’s what I used to think too, a couple of years ago. Then I got to see things more up close. And here’s the thing: nobody’s actually on the friggin’ ball on this one!”
I think the upside is that if it is “generational” people grow up and become more agentic as long as we foster the culture. I was remarking to a friend that it’s interesting how people don’t want to get up and learn to code to help with AI Safety (given the rates of AI doomerism) but people were willing to go into quant trading at seemingly higher rates to earn to give in early EA.
What cultural and structural features do you think might contribute to the perceived decline in a just-do-it attitude?
While I think there is considerable merit to what you’re saying, I think it’s also important to acknowledge the existence of challenges for would-be doers in 2023 that weren’t necessarily (as) present in 2008 or 2013. Some of these challenges are related to the presence and/or actions of big organizations and funders (e.g., the de-emphasis on earning to give affecting the universe of potential viable funders for upstarts). Others are related to changes in the meta more generally (e.g., a small group birthing a startup in the first wave’s signature cause area—global health—without outside help or funding is probably easier than doing the same in AI safety).
(this is just personal anecdote, so it shouldn’t be interpreted with too much confidence. Like all anecdotes, it may not generalize)
I only started to discover EA in 2020, so I think it is reasonable to say that I am of the newer “EA generation.” There are a few things that I’ve vaguely noticed within myself when I’ve thought of starting projects. Some are social/prestige/reputational things, some are financial stability things, and some are related to lack of skills. I’ll phrase these as “things my brain tells me, whether I agree with them or not:”
There are organizations with fairly wide-ranging remits that already exist, so I probably don’t need to start Project X, because they have more connections/expertise/context and are more well-placed to start it.
I don’t have the skill/knowledge/experience to do Project X well. The people in the EA community have really high standards, so I probably wouldn’t get clients for my consulting firm or funding for my charity if I am only able to do it fairly well, because they would want me to do it extremely well.
I don’t want to start something and have it fizzle out, because people in this relatively tight-knit/interconnected community would all see me fail, and then it would be really hard for me to do anything else in EA.
I don’t have anything prestigious to show. I didn’t start and sell my own company, I didn’t attend Yale, I didn’t work at McKinsey. I’m not able to signal impressiveness, and lacking some kind of signaling people won’t pay attention to me.
I don’t want to pursue an opportunity that only has funding guaranteed for 6-12 months. If I had several years of living expenses available then I could pursue more risky paths, but if finding a job might take between NUM1 and NUM2 months and I have less than NUM2 months of living expenses available, then pursuing risky paths seems too risky.
I would love the community to be more supportive in ways that would help with that. Things I would like:
Accept that new projects may not be that great; encourage them to grow, and maybe even chip in as well as criticising.
Accept and even celebrate failure.
Even more incubator style things. I love what CE does here.
I’m not immediately seeing how any of this contributes to a decline in a just-do-it attitude?
Michael_PJ seems to be talking about what happens when people see problems within EA (“...who not only identify problems in the world...who feel like an idea is neglected...I always feel a bit sad when I see ‘EA should …’ posts”).
I don’t think this applies to your first two bullets, where you seem to be talking about newer people thinking the existing people are doing a much better job than they could.
And your last three bullets seem to apply ~equally to both older and newer people (unless by bullet four you actually mean something closer to my previous sentence).
A few thoughts:
The level of quality and professionalism has risen since the old days which makes it intimidating to contribute your own half-assed thing.
Doing things does usually require time, and a lot of the early doing was done by students (and still is!). It’s much harder to be that involved when you’re older without becoming professionally involved. These days we have a lot more non-students!
I think all Will’s stuff about the perceived allocation of responsibility and control has a big impact.
I’m not super convinced that the fundraising situation is tougher? It seems much easier to me than it was. Especially for small things we have a decent range of funders.
I mean, we started out without any earning-to-give people funding us. I think that’s more the period Michael_PJ is referencing here (“most of our “big” orgs started with a couple of people in a basement”).
And it was the early emphasis on earning to give by these “big organizations and funders” that meant there were any earning-to-give people.
It feels a bit unfair to act like these orgs/funders are to blame for why EAs today find it more challenging to get funding, when these orgs/funders are the reason there’s any funding for other EAs at all, and didn’t receive any pay themselves when they started building these orgs. I don’t follow.
I don’t see where I am casting “blame” on anyone. I’m glad the megadonors chose to give to EA causes rather than (e.g.) stocking university endowments. It was reasonable to place less emphasis on earning-to-give in light of projections at the time.
However, it also seems that but for the introduction of megadonors and de-emphasis of EtG, there would be greater diversification of funding sources than actually happened.
Generally speaking, people wanting to try new things in an established ecosystem face different challenges from those wanting to create a new ecosystem. I’m not opining on whether those challenges are greater or lesser than those present in 2008 or 2013. But I think it’s important to understand why some members of the community don’t seem to feel an empowering just-do-it spirit.
But you’re talking about a “decline in a just-do-it attitude” caused by “challenges for would-be doers in 2023 that weren’t necessarily (as) present in 2008 or 2013”, but then seem to be saying that ‘Now we have tons of money from Open Phil and a lot from other places’ is a ‘challenge’ that EAs today face that wasn’t as present in 2008 or 2013... because back then there was hardly any money at all.
And I’m saying that I don’t see how having money now (skewed heavily to one funder) is supposed to explain a decline in a just-do-it attitude?
(I realise that you also say “a small group birthing a startup in the first wave’s signature cause area—global health—without outside help or funding is probably easier than doing the same in AI safety” but that seems very non-obvious to me and in fact I would have guessed the opposite.)
[Written in a personal capacity, etc. This is the first of two comments: second comment here]
Hello Will. Glad to see you back engaging in public debate and thanks for this post, which was admirably candid and helpful about how things work. I agree with your broad point that EA should be more decentralised and many of your specific suggestions. I’ll get straight to one place where I disagree and one suggestion for further decentralisation. I’ll split this into two comments. In this comment, I focus on how centralised EA is. In the other, I consider how centralised it should be.
Given your description of how EA works, I don’t understand how you reached the conclusion that it’s not that centralised. It seems very centralised—at least, for something portrayed as a social movement.
Why does it matter to determine how ‘centralised’ EA is? I take it the implicit argument is EA should be “not too centralised, not too decentralised” and so if it’s ‘very centralised’ that’s a problem and we consider doing something. Let’s try to leave aside whether centralisation is a good thing and focus on the factual claim of how centralised EA is.
You say, in effect, “not that centralised”, but, from your description, EA seems highly centralised. 70% of all the money comes from one organisation. A second organisation controls the central structures. You say there are >20 ‘senior figures’ (in a movement of maybe 10,000 people) and point out all of these work at one or the other organisation. You are (or are often, apparently, mistaken for) the leader of the movement. It’s not mentioned, but there are no democratic elements in EA; democracy has the effect of decentralising power.
If we think of centralisation just on a spectrum of ‘decision-making power’, as you define it above (how few people determine what happens to the whole), EA could hardly be more centralised! Ultimately, power seems the most important part of centralisation, as other things flow from it. On some vague centralisation scale, where 10/10 centralisation is “one person has all the power” and 1/10 is “power is evenly spread”, it’s … an 8/10? If one organisation, funded by two people, has 70% of the resources, considering that alone suggests a 7/10. (Obviously, putting things on scales is silly but never mind that!)
Your argument that it’s not centralised seems to be that EA is not a single legal entity. But that seems like an argument only for the claim that it’s not entirely centralised, rather than that it’s not very centralised.
All this is relevant to the point you make about “who’s responsible for EA?”. You say no one’s in charge and, in footnote 3, give different definitions of responsibility. But the key distinction here, one you don’t draw on, seems to be de jure vs de facto. I agree that, de jure, legally speaking, no one controls EA. Yet, de facto, if we think about where power, in fact, resides, it is concentrated in a very small group. If someone sets up an invite-only group called the ‘leaders’ forum’, it seems totally reasonable for people to say “ah, you guys run the show”. Hence the claim ‘no one is in charge’ doesn’t ring true for me. I don’t see how renaming this the ‘coordination forum’ changes this. Given that EA seems so clearly centralised, I can’t follow why you think it isn’t.
You cite the American Philosophical Association as a good example of “not too centralised”. Again, let’s not focus on whether centralisation is good, but think about how central the APA is to philosophy. The APA doesn’t really control any of the money going into philosophy. It runs some conferences and some journals. AFAICT, its leaders are elected by fee-paying members. As Jason points out, I wonder how centralised we’d think power in philosophy was if the APA controlled 70% of the grants and its conferences and journals were run by unelected officials. I think we’d say philosophy was very centralised. I think we’d also think this level of centralisation was not ideal.
Similarly, EA seems very centralised compared to other movements. If I think of the environmental or feminist movements—and maybe this is just my ignorance—I’m not aware of there being a majority source of funding, the conferences being run by a single entity, there being a single forum for discussion, etc. In those movements, it does seem that, de facto and de jure, no one is really in charge. As a hot take, I’d say they are each about 2-3/10 on my vague centralisation scale. Hence, EA doesn’t match my mental image of a social movement because it’s so centralised. If someone characterised EA as basically a single organisation with some community offshoots, I wouldn’t disagree.
I’ll turn to how centralised EA should be in my other comment.
These are two examples, but I generally didn’t feel like your reply really engaged with Will’s description of the ways in which EA is decentralized, nor his attempt to look for finer distinctions in decentralization. It felt a bit like you just said “no, it is centralised!”.
I don’t agree with this at all. IMO democracy often has the opposite effect, and many decentralized communities (e.g. the open-source community) have zero democracy. But I think this needs me to write a full post...
This seems false to me. If the only kind of decision you think matters is funding decisions, then sure, those are somewhat centralised. But that’s not everything, and it’s far from clear to me why you think that’s the only thing that matters?
For example, as Will discusses in the post, even amongst the individual EA orgs:
There are many of them, and they are small
They basically all do their own strategy and planning
Sure doesn’t look like centralized decision-making to me. You could say “For any decision, OP could threaten to refuse to fund an organization unless it made the choice that OP wants, therefore actually OP has the decision-making power”. But this seems to me to just not be a good description of reality. OP doesn’t behave like that, and in practice most decisions are made in a decentralized fashion.
This equivocates between saying that power does reside in a small group, and saying that we have created the perception that power resides with a small group. As I already argued, I think the former is false, and Will explicitly agrees with the latter and thinks we should change it.
My overall impression of your post is that it seems to me that you think the non-diversity of funding is bad (which I think we all agree on), but that for some reason funding is the only thing that matters when it comes to whether we describe EA as centralized or not.
Whereas to me EA looks like a pretty decentralized movement that currently happens to have a dominant funder. Moreover, we’re lucky in that our funder doesn’t (AFAIK) throw their weight around too much.
I think you mean something like “CEA’s strategy should be determined by the vote of (some set of people)”, which is a fine position to have, but there are clearly democratic elements in EA (democratically run organizations like EA Norway, individuals choosing to donate their money without deference to a coordinating body, etc.).
This is a tangent, but I thought I’d say a bit more about how we’ve done things at EA Norway, as some people might not know. This is not meant as an argument in any direction.
Every year, we have a general assembly for members of EA Norway. To be a member, you need to have paid the yearly membership fee (either to EA Norway or one of the approved student groups). The total income from the membership fee covers roughly the costs of organising the general assembly. The importance of the membership fee is mainly that it’s a bar of entry to the organisation, makes it clear if you’re a member or not, and it’s nice and symbolic that the fees can cover the general assembly. However, I think the crucial thing about how we’re organised at EA Norway isn’t that members pay a fee, but that the general assembly is the supreme body of the organisation.
During the general assembly, the attending members vote on an election committee, board members, and community representatives. During the general assembly, the members can also bring forward and vote on changes to the statutes and resolutions. Resolutions are basically requests members have for the board, that they’re asking the board to look into or comment on until the next general assembly. The general assembly also needs to approve an annual report of activities and a financial report.
The election committee is responsible for finding candidates for the different positions, and for nominating candidates to the board ahead of the next general assembly.
The board is responsible for setting a strategy for the organisation and assessing the Executive Director. Historically, the board has set 3-year strategies for the org, including objectives and metrics for those objectives. The Executive Director is tasked with carrying out that strategy and needs to report regularly on the progress of the metrics to the board. Redacted meeting minutes from each board meeting are made available to the members in an online community folder.
Community representatives are available to members who want to raise small or big issues that they feel like they can’t raise elsewhere. They can’t have any other position at the organisation. Per the statutes, the community representatives are to be involved as early as possible in any internal conflict, breach of statutes or ethical guidelines, and other matters that might be harmful for the members or EA Norway.
Hi Ben. It’s a pity you didn’t comment on the substance of my post, just proposed a minor correction. I hope you’ll be able to comment later.
You point out EA Norway, which I was aware of, but I think it’s the only one and I decided not to mention it (I’ve even been to the annual conference and apologise to the Norwegians—credit where credit’s due). But that seems to be the exception that proves the rule. Why are there no others? I’ve heard on the grapevine that CEA discourages it which seems, well, sinister. It seems a weird coincidence that there are nearly no democratic EA societies.
You say
“There are clearly democratic elements in EA [… E.g.] individuals choosing to donate their money without deference to a coordinating body”
I think you’ve misunderstood the meaning of democracy here. I think you’re just talking about not being a totalitarian state, where the state can control all your activities. I believe that in, say, Saudi Arabia (not a democracy) you can mostly spend your money on what you want, including your choice of charity, without deference to a coordination body.
Thanks for the nudge! Yeah I should have said that I agree with a lot of your comment. There are a few statements that are (IMO) hyperbolic, but if your comment was more moderate I suspect I would agree quite a lot.
I disagree though that this is a “minor correction” – people making (what the criticized person perceives as) uncharitable criticisms on the Forum seems like one of the major reasons why people don’t want to engage here, and I would like there to be less of that.
I think Efektivni Altruismus is similar (e.g. their bylaws state that members vote in the general assembly), and it has similarly been supported by a grant from CEA.
I’m glad someone mentioned national membership associations! I haven’t done a formal tally but I think Germany and Switzerland are also membership associations. I quite like the idea for EA Netherlands (I’m the co-director but here I’m speaking in a personal capacity).
If we had more national membership associations we could together set up a supranational organisation to replace much of CEA. Like other membership associations, this would have a general assembly, a board, committees, and an executive office. It’d be different from Michael’s suggestion in that the fee-paying would be done by the national orgs. I.e., the members would be EA Switzerland, EA Netherlands, etc., and they would send delegates to the General Assembly.
This organisation could then provide relevant public goods, e.g., international networking via the EAG event series and the EA Forum, community-building training via the CBG programme, or anything else its members might consider valuable (e.g., advocacy work). Off the top of my head, an analogous organisation might be the Dutch Association of Municipalities (VNG). You can read about how the VNG is governed here and what they do here.
This could also help diversify funding in community building. Right now, most national EA organisations get nearly all of their money from CEA, and CEA gets nearly all of its money from OP’s Effective Altruism Community Growth (Longtermism) programme. Naturally, this means national organisations are incentivised to engage in more longtermist community building than they are in GHD or animal welfare community building, and we don’t know if this is what the EA community wants.[1]
From what I understand, most national EA membership associations don’t raise much from their membership fees, but perhaps this could change. For example, the other weekend I visited the Lit and Phil in Newcastle. They’ve been going for over 200 years. Members pay GBP 150 per year, and there are over 1,000 of them. That kind of setup would go a long way in funding an org such as EA Netherlands.
Of course, whether this should be a decision that’s made by the EA community democratically, or by some other body such as the coordination forum, is something we haven’t decided.
I think one large disadvantage of a membership association is that it will usually consist of the most interested people, or the people most interested in the social aspect of EA. This may not always correlate with the people who could have the most impact, and creates a definite in-group and out-group.
I’d be worried about members voting for activities that benefit them the most rather than the ultimate beneficiaries (global poor, animals, future beings).
Yes these are things I worry about too!
First, about the risk of a membership association selecting for the people most interested in EA, the same holds for the current governance structure (but even more so). However, I don’t think this is such a terrible thing. It can be an issue when you’re a political party and you have a membership that wildly diverges from the electorate, thus hampering their ability to select policies/leaders that appeal to the electorate. But we aren’t a political party.
Second, about the risk of a membership association selecting for those who are mostly interested in the social aspect of EA, I don’t think this is necessarily the case. Do you think people join Greenpeace for the social side of things? You’d have to pay to become a member, and it would come with duties that, for most people, aren’t very exciting (voting, following the money, etc). I’d be more worried about it selecting for people with political inclinations. But even then, it isn’t a given that this would be a bad thing.
Lastly, regarding your worry that members would vote for activities that benefit them the most: this is perhaps the main reason I think we ought to consider a more democratic movement. After all, the same risk holds for the current governance structure (to err is human). A big benefit of a membership association is that you have mechanisms to correct this; a core duty of membership would be holding the leaders to account.
In my opinion, the biggest issue with making the movement more democratic is that it could make things complicated and slow. This might make us less effective for a while. But, it might still be better in the long run.
EA isn’t a political party, but I still think it’s an issue if the aims of the keenest members diverge from the original aims of the movement, especially if the barrier to entry to be a member is quite low compared to being in an EA governance position. I would worry that the people who would bother to vote would have much less understanding of the strategic situation than the people who are working full time.
Maybe we have had different experiences; I would say that the people who turn up to more events are usually more interested in the social side of EA. Also, there are a lot of people in the UK who want to have impact and have a high interest in EA but don’t come to events and wouldn’t want to pay to be a member (or even sign up as a member if it was free).
I think people can still hold organisations to account and follow the money, even if they aren’t members, and this already happens in EA, with lots of critiques of different organisations and individuals.
For better and/or for worse, the membership organization’s ability to get stuff done would be heavily constrained by donor receptivity. Taking EA Norway as an example, eirine’s comments tell us that (at least as of ~2018-2021), “[t]he total income from the membership fee covers roughly the costs of organising the general assembly,” that “board made sure to fundraise enough from private donors for” the ED’s salary, but that most “funding came from a community building grant from the Centre for Effective Altruism (CEA)” (which, as I understand it, means Open Phil was the primary ultimate donor).
To me, that constrains both how thoroughly democratic a membership association would be and how far afield from best practices a democratic membership association could go.
Re divergence, there will always be people who want to move the movement in a different direction. More democracy just means more transparency, more reasoning in a social context,[1] more people to persuade, and a more informed membership. Hopefully, this stops bad divergence but still allows good pivots.
The downside is that everything takes longer. Honestly, this is perhaps my biggest worry about making things more democratic: it slows everything down. So, for example, the pivot from GHD to longtermism in EA’s second wave would probably have taken much longer (or might not have occurred at all). If longtermism is true, and if it was right for EA to make that pivot, then slowing that pivot down would have been a disaster.
I don’t think I understand why you think having a voting membership would mean more social events. Could you explain it to me? I think it would make the movement more responsive to what the community thinks is best for EA, and I think there’s a case to be made that thousands of brains are better than dozens. This might mean more social events, but it might mean fewer. Let’s have the community figure it out through democracy.[2]
Yes, people can definitely hold people to account without being members, but they have far less ‘teeth’. They can say what they think on the forum, but that’s very different from being able to elect the board members, or pass judgements as part of a general assembly.
See Sperber and Mercier’s ‘The Enigma of Reason’ for why this might be a good thing
Personally, I think we should do fewer purely social events, but we should do more things that are both impactful and social.
I mean, there is no state, so I guess I just don’t understand the analogy you’re drawing. If EA had democratic control of funding, you wouldn’t describe that as a “democratic element”?
But it sounds like we agree there is at least one democratic element, which is all that is needed to disprove the original claim, so probably no need to pursue this thread anymore. Thanks for the response!
I’m not sure yet about my overall take on the piece but I do quibble a bit with this; I think that there are lots of simple steps that CEA/Will/various central actors (possibly including me) could do, if we wished, to push towards centralization. Things like:
Having most of the resources come from one place
Declaring that a certain type of resource is the “official” resource which we “recommend”
Running invite-only conferences where we invite all the people that are looked-up-to as leaders in the community, and specifically try to get those leaders on the same page strategically
Generally demonstrating intensely high levels of cooperativeness with people who are “trusted” along some shared legible axis, and much lower levels of cooperativeness with outsiders
Not publishing critical info publicly, instead relying on whisper networks to get the word out about things
I didn’t start off writing this comment to be snarky, but I realized that we are, kind of, doing most of these things. Do we intend to? Should we maybe not do them if we think we want to push away from centralization?
Thanks! I agree that we are already (kind of) doing most of these things. So the question is whether further centralisation is tractable (and desirable). Like I say, it seems to me the big thing is whether there’s someone, or some group of people, who really wants to make that further centralisation happen. (E.g. I don’t think I’d be the right person even if I wanted to do it.)
Some things I didn’t understand from your bullet-point list:
By “resources” do you primarily mean funding? (I’ll assume yes.)
Here, by “resource” do you mean information (books, etc.)? (I’ll assume yes.)
This doesn’t clearly map onto “centralised” vs “decentralised” to me?
Of your list, the first two bullet-points seem non-desirable to me in a totally ideal world. But of course having lots of funding from OP is much, much better than not having the funding at all!
The second two bullet points seem good to have, even if EA were more decentralised than it is now.
Yeah, sorry, I wrote the comment quickly and “resources” was overloaded. My first reference to resources was intended to be money; the second was information like career guides and such.
I think the critical-info-in-private thing is actually super impactful towards centralization, because when the info leaks, the “decentralized people” have a high-salience moment where they realize that what’s happening privately isn’t what they thought was happening publicly; they feel slightly lied-to or betrayed, and lose perceived empowerment and engagement.
I feel like you … maybe did not try very hard to brainstorm incremental pro-centralisation steps? I set aside 5 minutes and came up with 17 options, mainly quite tractable, that CEA/EV/OP could do if they wished, starting with very simple ideas like “publicly announce that centralisation is good”.
(Not sharing the list because I’m not convinced I want more power centralised).
Reading this post is very uncomfortable in an uncanny valley sort of way. A lot of what is said is true and needs to be said, but the overall feeling of the post is off.
I think most of the problem comes from blurring the line between how EA functions in practice for people who are close to money and the rest of us.
Like, sure, EA is a do-ocracy, and I can do whatever I want, and no one is stopping me. But also, every local community organiser I talk to talks about how CEA is controlling and how their funding comes with lots of strings attached. Which I guess is ok, since it’s their money. No one is stopping anyone from getting their own funding and doing their own thing.
Except for the fact that 80k (and other thought leaders? I’m not sure who works where) have told the community for years that funding is solved and no one else should worry about giving to EA, which has stifled all alternative funding in the community.
(Just wanted to add a counter datapoint: I have been a local community organizer for several years and this has not been my experience.)
Talking from my time in EA NTNU, my experience was indeed the complete opposite. Funding and follow up from CEA was excellent, kind and thoughtful. There were virtually zero strings attached and at no point did I feel like they were controlling.
The feelings of other organisers might differ of course, but I’ve not heard about this from anyone personally, and I did talk to quite a lot of student group leaders around 2017-2019.
Again, this is just my experience.
I wasn’t sure about the ‘do-ocracy’ thing either. Of course, it’s true that no one’s stopping you from starting whatever project you want—I mean, EA concerns the activities of private citizens. But, unless you have ‘buy-in’ from one of the listed ‘senior EAs’, it is very hard to get traction or funding for your project (I speak from experience). In that sense, EA feels quite like a big, conventional organisation.
I think there is a steelman of your argument which seems more plausible to me, but taken at face value this statement just seems clearly false?
E.g. there are >650 group organizers – how many of them do you think have met the people on that “senior EA’s” list even once? I haven’t even met everyone on the list, despite being on it!
When I think of highly centralized “conventional organizations” I think of Marissa Mayer at Google personally choosing the fonts of new projects and forcing everyone to queue outside her office because even executives weren’t allowed to make decisions without her in-person approval. This seems extremely far from how EA works?
Yeah, I guess I mean genuinely new projects, rather than new tokens of the same type of project (eg group organisers are running the same thing in different places).
As MacAskill points out, it’s pretty hard to run $1m+/yr project (or even less, tbh) without Open Philanthropy supporting it.
But, no, I’m not thinking about centralisation in terms of micromanagement, so I don’t follow your comment. You can have centralised power without micromanagement.
What does it mean to have centralized power without micromanagement? Like I could theoretically force a group organizer to use a different font, I just choose not to?
To take one of the top examples in the post’s centralization continuum, presumably the US military counts as having a highly centralized power structure despite the President and Secretary of Defense not micromanaging. People lower on the food chain exercise power delegated and re-delegated from those two, but they are the ultimate fount of power.
They have the right to control—and responsibility to supervise—the powers they have delegated downward. With some uncommon arguable exceptions like military judges, no one in the military has or exercises power independently of POTUS and SECDEF. And people know that if they use their delegated power in ways that would anger those higher up, they won’t have that power for too much longer.
That’s how power ordinarily works in larger centralized contexts; big-company CEOs refusing to delegate font-approval authority is very much the exception.
Thanks! Just to double check: this is agreeing with my example, right? Like you are pointing out that the US military is centralized because POTUS could theoretically tell a random private what to eat for lunch (but chooses not to), similar to how I could theoretically[1] force a group organizer to use a different font (I just choose not to)?
If so: I would be surprised if the average private says things like “I’m eating potatoes for lunch because it’s impossible to do projects without buy-in from POTUS”? I agree there is some technical sense in which that’s true, but a new recruit who oriented to their work by thinking they can only do stuff by convincing POTUS would probably struggle to navigate the military, similar to how I claim that an EA who believes they can only do stuff by convincing “senior EAs” is going to struggle to navigate EA (even under the assumption that EA is as centralized as the military).
I actually don’t think I could do this
Whether you could get someone nominally ‘under’ you to do an arbitrary thing is not a good proxy for power.
CEA is a regular hierarchical company, but it would still go very poorly if you decided to, on a power trip, tell one of your employees what to eat for lunch. This mostly doesn’t matter, though, because that is a goal you are very unlikely to have.
As a co-organizer of the Boston Meetup, if you sent me an email demanding that we serve potatoes at the next gathering, I would be very confused. But you could get CEA’s groups team to come up with guidance on meetup food, heavily influence that process, and I could then receive an email advocating serving potatoes from people I trusted and who I was pretty sure had thought about it a lot more than I had. Which would have a decent chance of resulting in potatoes at the next meetup.
Power is always, in a technical sense, indirect: no one is pulling levers inside other people’s heads to get them to do things. There is always some amount of inspiration, persuasion, threat, or other intermediary. Sometimes this is formalized, sometimes “soft”, but that mostly only matters for legibility. Maybe a better measurement for power is something like, if there’s something important about the way things are currently done that you want to change, how likely and how much are you able to cause that change?
By that measure, OP has a tremendous amount of power: through a combination of employing highly respected people and having control over the funding of most EA work they can make large and deep changes to how the EA movement grows and what work is carried out under its banner.
Thanks! This feels like a reasonable definition, but seems different from what Michael was talking about? He said:
If CEA tried to push some dietary standard, I’m pretty sure there would be a ton of complaints and blowback. But even if we somehow kept going through all of that, I’m pretty sure you would still be able to run a potato-less meetup, which doesn’t feel consistent with the “need buy-in” claim.
(Whereas in “big, conventional organizations” if the CEO says “the cafeteria is going to serve potatoes” then the chefs don’t have much of a choice.)
I think there’s an important difference to be made between “level of centralization” in general and “level of power centralization.” When people are saying “EA is too centralized,” I think they are predominately referring to the latter concept.
Moreover, to the extent that the text above is breaking down centralization into sub-dimensions, and then impliedly taking something like the mean score of sub-domains to generate an overall centralization score, I don’t think that would be correct. Rather, I think the overall centralization measure is strongly influenced by the sub-dimension with the highest centralization score, especially where that dimension is decision-making / control of resources.
As an example, imagine analytic philosophy (AP) except that its meta organizations and individual practitioners (or at least university departments) were dependent on a funding ecosystem of short-term grants. Moreover, in this hypothetical, 70% of grants are made by a single grantmaker and the overwhelming majority by a few dozen grantmakers who do not rubber-stamp grant renewals. Based on Will’s rating of EA, this would seem to move AP’s decision-making concentration to “in the middle.” If one determines centralization by some sort of averaging-like mechanism, this wouldn’t move AP’s overall centralization average that much. But I suspect that funding structure would have a huge practical impact on AP’s centralization (or that of any other academic field).
“to the extent that the text above is breaking down centralization into sub-dimensions, and then impliedly taking something like the mean score of sub-domains to generate an overall centralization score.”
Thanks for pointing this out! I didn’t intend my post to be taking the mean score across sub-domains; I agree that of the dimensions I list, decision-making power is the most important sub-dimension. (Though the dimensions are interrelated: If you can’t tightly control group membership, or if there isn’t legal ownership, that limits decision-making power in some ways.)
To make sure I understand your view better, on my spectrum (from North Korea to the US startup scene) do you think I placed EA-as-it-currently-is too low on the centralisation spectrum? I said current EA is “similar to fairly decentralised religious groups, and many social movements”.
(Fine if your answer is “this spectrum doesn’t make any sense” → it’s pretty subjective!)
I think it is further toward the centralized side of the spectrum than that. I tentatively place it somewhere between the Scouts and sports organizations.
The Spectrum
I think the spectrum makes sense, with two caveats. First, these organizations/movements differ on certain sub-dimensions. Reasonable people could come up with different rankings based on how they weight the various sub-dimensions.
Second, in some examples there are significant differences between the centralization of the organization per se and how much influence that organization has over a broader field of human activity. I think we’re mainly trying to figure out how centralized EA is as a field of endeavor, not as an organization (since it isn’t one). Thus, my model gives significant weight to the field-influence interpretation, especially by considering how feasible it is to seriously practice the field of activity (e.g., basketball) apart from the centralized structure. However, I’ve tried not to write off the organizational level entirely.
Comparison to Relatively Decentralized Religious Groups
I’ll take the Southern Baptist Convention (in the US) as an example of a “fairly decentralised religious group.” It is on the decentralized end of Protestantism (which was one of your examples), but that seems fair given that EA is to be placed between those groups and some social movements. In addition, there are a large number of Baptist and other churches in the US that aren’t part of anything larger than themselves, or are part of networks even weaker than the SBC.
Recently, the SBC kicked out some churches for having female pastors. Getting kicked out of the SBC is very rare, which is itself evidence of lower control, but the main consequence for those churches is basically . . . they can’t advertise themselves as part of the SBC. There’s no trademark on “Baptist,” and no centralized control over who joins an SBC church (that is decided by leaders in the individual church). Movement in and out of the SBC, and between SBC churches, is pretty easy. There are few formal barriers to forming your own church and applying for its membership in the SBC. There isn’t a huge centralized information flow, given the emphasis on the Bible and the hostility to a priestly class. With the exception of startup churches in a transitional period, there is no centralized financial support.
But here’s the critical part to me—there are a lot of small Baptist churches. If you can get support from 100 people to form a congregation (that’s a tiny fraction of Baptists in the US), you can form your own church and do most of the things that Baptist churches do, like administer baptisms and communion, preach, send missionaries, etc. Being in good standing with the SBC doesn’t give other churches a huge competitive advantage—SBC membership is actually unattractive to many Baptists (either because it is seen as too conservative or too accommodating).
Moreover, even control over local churches is decentralized to an extent—leaders are generally elected, and every pastor knows that people also vote their displeasure with their feet and their pocketbooks. There are many dozens of small Baptist churches in the smallish county where my parents still live. So I would rate fairly decentralised religious groups as pretty low on almost all sub-dimensions, including the most important one.
Where to Place Current EA?
Recognizing the limitations of the scale, I would tentatively place EA somewhere between the Scouts and certain sports organizations. Comparing mixed economies to EA would require too much space; I’ll leave that as undefined.
Generally More Centralized than Sports Organizations
The sports organizations are more structured than EA in terms of legal structure, membership, and so on. However, I think that is generally outweighed by their very incomplete control over their fields. Taking USA Basketball as an example, there is a lot of important basketball that happens outside their influence or at least control, including the most economically and socially significant basketball. The NBA answers to no one, high-school basketball is run by state-level associations of high schools, etc. Running your own independent basketball league is also very plausible, and in most cases I’m not aware of a massive advantage of affiliating it with USA Basketball.
Significantly and effectively practicing EA apart from Open Phil and the organizations it heavily funds is possible . . . but I would submit that it is noticeably harder than significantly and effectively practicing basketball without being in communion with USA Basketball.
Generally Somewhat Less Centralized than the Scouts
I’m going to place EA as less centralized than the Scouts (here operationalized as Scouts BSA, f/k/a The Boy Scouts of America, since that’s the org with which I am most familiar). However, it’s important to note the extent to which the centralized organization’s power is significantly checked by a variety of independent actors. First off, the local councils are legally independent with their own independent funding networks and relationships, and the national organization needs them almost as much as they need the national org. Thus, it would be fair to characterize them as more like US states than divisions of a single entity. Breaking up with a local council would probably mean the end of Scouts BSA in that council’s jurisdiction. I don’t think Open Phil’s relationship with its median grantee has the same federalist nature.[1]
Second, there are a number of external constraints on decisionmaking. Scouts BSA is significantly dependent on corporate sponsors and private philanthropy (especially after its bankruptcy...), and the need to maintain financial support is a check on centralized power. It is also significantly dependent on religious and other organizations that serve as sponsoring organizations for individual troops. Finally, it needs to keep its volunteers and parents—who derive no financial advantage from affiliation with Scouting—happy.
So while there is a good deal of formal concentration of decision-making power, there are also some very real constraints on that power. Those constraints come from entities and people who are not financially intertwined with Scouts BSA. And they are not pro forma—if I recall correctly, the LDS (Mormon) church was by far the largest sponsoring organization and walked away with its members to run its own program because it didn’t like the way things were going with Scouts BSA. Likewise, I believe a decent number of Scouts in the conservative Southern US moved away.
Also of note: there is an opportunity to do Scouting-like things outside of Scouts BSA in the US, but there are rather significant disadvantages that make alternative practice a less than full substitute. For instance, status as an Eagle Scout carries significant prestige in many places, and isn’t available outside Scouts BSA.
Although I’m not going to suggest that major EA power centers experience no outside checks on their exercise of power, it seems to me that those checks are much weaker than in Scouts BSA, and are often not held by people who are financially and otherwise independent from the power centers.
It is true that the individual councils are themselves centralized. However, I believe they are even more constrained by some of the factors in the next paragraph than the national org.
Hi Will,
Thanks for the post. I think the below statement is inaccurate
A single funder (Open Philanthropy, “OP”) allocates the large majority (around 70%[2]) of funding that goes to EA movement-building. If you want to do an EA movement-building project with a large budget ($1m/yr or more), you probably need funding from OP, for the time being at least. Vaidehi Agarwalla’s outstandingly helpful recent post gives more information.
Whilst I agree OP provides the large majority, as you mention, and the concentration of decision-making within that could be a problem, you could have a movement-building project with a budget over $1m a year without funding from OP—Longview is an example.
On Vaidehi’s post, I went back to my records and my donation alone is more than 3x the total in the “other donors” category in her post. If other donors are included it could be out by more than 15x. I am working with Vaidehi to get a more accurate total.
I do agree that it is important to diversify the donor base and the many effective giving initiatives are important in that regard.
Hence Will saying “probably”?
Or do you think that despite OP providing the large majority, EA just has so much money at the moment (once you add in your donations and perhaps others’) that a new $1m/yr+ movement-building project can probably get funding from a non-OP source?
Given the shortage of funding for existing EA organisations, there is clearly not a lot of money at the moment. But I think if there is a new $1m/yr+ movement building project with exceptional risk-adjusted expected impact it could probably get funding from non-OP sources, but that will be at least partially at the expense of existing projects.
“If a proposed $1m/yr+ project has exceptional expected impact, non-OP sources will probably stop funding existing projects and fund you”
sounds like a high enough bar to me that
“A proposed $1m/yr+ project probably needs funding from OP”
is not inaccurate?
Thanks for this post! One thought on what you wrote here:
I feel unsure about this. Or like, I think it’s true we have those downsides, but we also probably get upsides from being in the middle here, so I’m unsure we’re in the worst of both worlds rather than e.g. the best (or probably just in the middle of both worlds)
e.g. We have upsides of fairly tightly knit information/feedback/etc. networks between people/entities, but also the upsides of there being no red tape on people starting new projects and the dynamism that creates.
Or as another example, entities can compete for hires, which incentivises excellence and people doing roles where they have the best fit, but also freely help one another become more excellent by e.g. sharing research and practices (as if they are part of one thing).
Maybe it just feels like we’re in the worst of both worlds because we focus on the negatives.
This seems true to me, although I don’t have great confidence here.
For some years I had at times thought to myself “Damn, EA is pulling off something interesting—not being an organization, but at the same time being way more harmonious and organized than a movement. Maybe this is why it’s so effective and at the same time feels so inclusive.” Not much has changed recently that would make me update in a different direction. This always stood out to me in EA, so maybe this is one of its core competencies[1] that made it so successful in comparison to so many other similar groups?
It’s possible that there is a limit on how long you can pull it off when community grows, but I would be a bit slow to update during turbulent waters—there is for sure valuable signal during these (like “how well are we handling harsh situations?”), but also not so valuable (“is our ship fast?”).
Good explanation of core competencies—https://forum.effectivealtruism.org/posts/kz3Czn5ndFxaEofSx/why-cea-online-doesn-t-outsource-more-work-to-non-ea
(General question, not necessarily for Will in particular)
Re getting another regrants program started: has there been a look at how this went with Future Fund’s regranting program? I viewed it as pretty experimental, and I don’t have much sense of whether someone’s looked at the pros and cons of that system. Obviously that project came to a sudden end, so I understand why any planned analysis didn’t happen as planned.
We’ve just launched a regrants program inspired by the Future Fund! I think the FF team’s June 2022 writeup best captured the benefits of the system, but I don’t think anyone has posted an analysis since then—which we’re (naturally) very interested in.
Looking through FF’s database of past regrants, I was impressed by several regrants which were early to identify work I would describe later as quite good. Examples include the Future Forum, Dwarkesh Patel, and Quantified Intuitions/Sage. Of course, we (Manifold Markets) ourselves were the recipient of a $1m regrant which was the anchor for our seed round, so we’re fairly biased here.
I’ve been thinking about regranting on and off for about a year, specifically about whether it makes sense to use bespoke mechanisms like quadratic funding or some of its close cousins. I still don’t know where I land on many design choices, so I won’t say more about that now.
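(For readers unfamiliar with the mechanism mentioned above: the core of quadratic funding is that a project’s total funding is the square of the sum of the square roots of individual contributions, with a matching pool covering the gap above direct donations. A minimal illustrative sketch — the function name is mine, and real deployments add pool caps, identity checks, and collusion defences:)

```python
import math

def qf_match(contributions):
    """Quadratic funding match for one project.

    A project's total is (sum of sqrt of each contribution)^2;
    the matching pool tops up the difference over direct donations.
    """
    direct = sum(contributions)
    total = sum(math.sqrt(c) for c in contributions) ** 2
    return total - direct

# Broad support attracts a far larger match than the same amount
# of money from a single large donor:
broad = qf_match([1.0] * 100)  # 100 donors giving $1 each -> $9,900 match
single = qf_match([100.0])     # 1 donor giving $100       -> $0 match
```

This is what makes the mechanism interesting for regranting design: it rewards breadth of support rather than depth, though it is famously vulnerable to collusion and sybil attacks without some identity layer.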
I’m not aware of any retrospective on FTXFF’s program, but it might be a good idea to do one when we have enough information to evaluate performance (so in 6-12 months?). Another thing in this vein that I think would be valuable and could happen right away is looking into SFF’s S-process.
I think MoreGood would be a great rebrand for the forum!
Just want to register some disagreement here about the name change, to others in this thread and Will (not just you Gemma!). In rough order of decreasing importance:
I really don’t like the name MoreGood. It’s a direct callback to LessWrong. I don’t want to have to endorse LW to endorse EAF, or EA more generally, or the causes we care about, and this name change would signal that. Yes, there’s some shared intellectual history, but I don’t think LW-rationalism is inherent to or necessary for EA.
For people new to/interested in EA, they’ll probably search for “EA” or “Effective Altruism”. They wouldn’t know the rebrand or name change unless there was a way to preserve it for SEO.
I think EA Forum is fine, and it is the major place for EA discussion online at the moment. I don’t think it’s that unrepresentative of EA?
Any other online forum will also be skewed towards those online or ‘extremely online’. I think EA Twitter is much worse for this than the Forum.
In the spirit of do-ocracy, there’s no reason that other people can’t set up an alternative forum with a different focus/set of norms, though it will probably suffer from the network effects that make it difficult to challenge social media incumbents.
I do accept it was just a small draft suggestion though.
Some thoughts from me (as a big fan of MoreGood):
I don’t think it would signal this to many people.
To me this is a feature, not a bug. I personally think having a slightly higher barrier to entry (you have to be engaged enough to have found the forum via other means than the first page of Google results) would do this forum good overall.
I think having a very descriptive name is probably not worth the increase in how often this forum gets quoted with more apparent authority than it actually has. [Edit: This is quite theoretical. These are the only actual examples I can think of right now and they’re basically fine.]
Agreed. It’s still a downside to me that a less clear name means that there’ll be more fairly engaged EAs who end up with just Twitter etc. to discuss EA online.
Sure, but the name change would make people feel more empowered to? (And I’m undecided on whether more forums would be good or bad.)
I wonder if “the CEA forum” would work? Low edit distance, gives the idea that it’s related to EA while not necessarily representing all of it. Downside is that it works less well if CEA changes their name.
I like there being a centralised forum which attempts good epistemics.
Compared to Twitter, where incentives are towards controversy and views, I am glad that there is a nexus of EA comment on this forum.
I don’t know that a decentralised set of forums would have been able to reduce the presence of community discourse, and I think that has been healthy for us as a community.
In short, I am not sure that we are well integrated enough as a community (particularly at the speed of growth) to be decentralised fully across digital environments.
Good name though
Oh I didn’t read Will as proposing multiple forums (although what he says is compatible with that proposal).
I thought he was saying that the name should better reflect how representative the forum is of EA thought at large. (The ‘decentralisation’ aspect being moving from the impression of ‘This forum is the main hub of all EA thought’ to ‘This forum is the main hub of Extremely Online EA thought’.)
I mean, I think it would have the effect of endorsing that, which I disprefer.
Though you make a good point about extremely onlineness.
I think the intention wasn’t “have lots of forums where EA topics are discussed”, so much as “don’t make it sound like the (in practice, one) forum is the only one that can be”.
I can’t help but notice that MoreRight is the inverse of LessWrong, even though I like MoreGood far better than MoreRight. 😂
FYI to LW old-timers, “MoreRight” evokes the name of a neo-reactionary blog that grew out of the LW community. But I don’t think it’s a thing anymore?
I’d also be concerned that “MoreGood” could evoke “MoreRight” in addition to “LessWrong”. While LW association could go either way, MR association (and neoreaction in general) I’d like us to stay far away from!
Wow, what a curious piece of LessWrong history. Thanks for sharing!
The forum naming conversation feels like an example of something that’s been coming up a lot that I don’t have a crisp way of talking about, which is the difference between “this is an EA thing” as a speech act and “this is an EA thing” as a description. I’m supportive of orgs and projects not branding themselves EA because they don’t want to or want to scope out a different part of the world of possible projects or don’t identify as EA. But I’m also worried about being descriptively deceptive (even unintentionally), by saying “oh, this website isn’t really an EA thing, it’s just a forum where a lot of EAs hang out.” That feels like it confuses and potentially deceives in a way I don’t like. Don’t know how to thread this needle, seems hard!
This is honestly the best idea I’ve heard in a long time!
In my opinion, the largest effect of rebranding the name of the forum is that newcomers searching for “effective altruism” for the first time would be less likely to find the forum, particularly if alternatives to the forum do some SEO. This has both upsides (people are less likely to be intimidated/skeeved out by weird stuff or community drama, people’s first exposure to EA-in-practice wouldn’t be filled with Extremely Online people), and downsides (whatever else they see instead may be less good as introductions, eg by being more manufactured to be presentable, rather than having mostly earnest conversations).
I’m not convinced that a name change would be net positive, but if we want to make it clearer that the forum doesn’t necessarily represent EA, one option is to have the name be less descriptive and just reference something vaguely positive instead (ideas include: polaris, salon, agora, zephyr, etc). This is akin to how Sierra Club is clearly not representing all of environmentalism, and Leiter Reports is clearly not representing all of philosophy.
I spontaneously thought that the EA forum is actually a decentralizing force for EA, where everyone can participate in central discussions.
So I feel like the opposite, making the forum more central to the broader EA space relative to e.g. CEA’s internal discussions, would be great for decentralization. And calling it “Zephyr forum” would just reduce its prominence and relevance.
Yeah, seems helpful to distinguish central functions (something lots of people use) from centralised control (few people have power). The EA forum is a central function, but no one, in effect, controls it (even though CEA owns and could control it). There are mods, but they aren’t censors.
I think this is a place where the centralisation vs decentralisation axis is not the right thing to talk about. It sounds like you want more transparency and participation, which you might get by having more centrally controlled communication systems.
IME decentralised groups are not usually more transparent, if anything the opposite as they often have fragmented communication, lots of which is person-to-person.
[Written in a personal capacity, etc. This is the second of two comments, see the first here.]
In this comment, I consider how centralised EA should be. I’m less sure how to think about this. My main, tentative proposal is:
We should distinguish central functions from central control. The more central a function something serves, the more decentralised control of it should be. Specifically, I suggest CEA should become a fee-paying members’ society that democratically elects its officers—much like the American Philosophical Association does.
I suspect it helps not just to ask “how centralised should EA be” but also “what should be centralised and what shouldn’t?”. Some bits are, as you say, natural monopolies in that it’s easiest if there’s one of them. This seems most true for places where people meet and communicate with each other: a conference is valuable because other relevant people are there. For EA, I guess the central bits are the conferences, the introductory materials, the forum, the name(?), maybe other things. In my post on EA as a marketplace, which you kindly reference but don’t seem sympathetic to, I point out you can think of EA on a hub-and-spoke model. Imagine a bicycle wheel, where the rim represents the community members and the spokes the connections. There are some bits we widely participate in, the hub, such as the conference. But, besides that, people have links to only a subset: longtermists hang out mostly with longtermists, etc.
Now, it does not follow that simply because something has a central function (i.e., it’s used by lots of people) it should be centrally controlled, i.e. controlled by a few people. In fact, we often think the opposite is true. The more central something is, the more often we think it should be democratically controlled. The obvious example of this is the state. It has a big impact on our lives, it’s a natural monopoly, so we tend to think democracy is good to make it accountable. Rule of thumb, then: central role, decentralised control.
Another example of this is the one you gave, the American Philosophical Association. It’s pretty useful to have a place that convenes those who have a common interest—in that case, doing academic philosophy. The APA seems useful and unobjectionable (this is my impression of it, anyway). But, the reason for this is that it doesn’t and can’t do anything besides serve and convene its members. It doesn’t take sides in philosophical debates or try to steer the field. People would object if it tried. How is this unobjectionableness achieved? I imagine it’s to do with the fact it’s a fee-paying society where those in positions of power are elected by members, as well as the fact it doesn’t control lots of funding. Despite being philosophers, I doubt the members of the APA would want it to be run by unelected philosopher kings! Roughly, the moral of the story seems to be that, if something is so central you can’t avoid participating in it, you probably want decentralised power.
You talk about various possible ways to centralise or decentralise EA. But why is there no suggestion of democratising the central element of EA, namely the Centre for Effective Altruism? Here’s a concrete suggestion: CEA becomes like the APA: a fee-paying membership society where the members elect the trustees or officials, who then administer various central functions, like a conference, journals, etc. Is this an absurd idea? If so, why? It’s hardly radical. Members’ organisations are a default coordination solution where people have a common interest. I’m not sure it’s a brilliant idea, but it’s weird it’s not been discussed, and I’d be happy for someone to tell me why it would be terrible.
Worried no one would join? If you want to kick-start it, you could provide a year’s free membership to those who have signed the GWWC pledge or attended an EA conference. There may be other ideas. EAs already focus on much harder tasks like influencing the next million years and ending poverty. Surely setting up a membership society, a solved problem, is not insurmountable.
There’s been a lot of discussion recently about people not feeling like they’re part of EA. Well, here’s a cheap solution: let people become members of CEA. Then, you’re in and you can have a say in how things are run. This has other advantages: it makes CEA accountable to its members. It also allows it to genuinely speak for them, which currently it can’t do, because it doesn’t represent them. By charging a membership fee, CEA can offset the costs of other things, so won’t be so reliant on other donors. Honestly, this membership scheme would probably work if it were just about the EA conferences (the “EA conference Association”?).
Who should be against this idea? I recognise some effective altruists are sceptical of applying democracy to philanthropy, but, to be clear, I am not advocating for communism, that all of EA or ‘EA resources’ should come under common control: that Open Philanthropy should give 1/Nth of its resources to the N people self-describing as EA, or that if you want to become part of EA you need to give the community all your (spare) money. That’s absurd. I am only in favour of democratising the central convening and coordinating parts; this is like saying the APA should be a democratic entity, not that all philosophy and philosophy funding should be run by a democracy. As far as I can see, all the central elements of EA fall under CEA (I’m also not against spinning off parts of EVF.)
I don’t think it makes sense to have a democracy for the non-central elements—the spokes. I believe in free enterprise, including free philanthropic enterprise, and private ownership.
Where does Open Philanthropy fit into this, given it has most of the ‘EA money’? I’m not sure it does fit in. It’s fortunate there’s one really big donor (because there’s more money), and unfortunate there’s only one (but that one will have outsized influence). I think society should control some of people’s income through taxation. But I also believe people should have private property, and where they spend that (assuming it’s inside the law) is best left up to them, rather than trying to also make that part subject to democratic control. Hence, insofar as worries about centralisation spring from there being a single, huge funder, I don’t have a neat solution to that. That doesn’t mean other things couldn’t be done, however, such as democratising CEA. Indeed, if CEA were a democratic society, I’d be fairly relaxed about Open Philanthropy providing funding to it, because control of CEA would still be decentralised, so that would mitigate concerns about undue influence. Of course, I have views about how, morally speaking, people should spend their post-tax wealth, but that seems a separate issue.
Finally, you raise the point of who has the ‘legitimacy’ to centralise or decentralise EA, and say it should come from CEA or Open Philanthropy (the use of ‘legitimacy’ is somewhat interesting, because that has democratic connotations). At this point, I should probably mention I applied to be a board member of Effective Ventures, and in my application form, I explicitly stated I was interested in exploring bringing democratic elements into CEA, and so decentralising it. I didn’t make it to the first stage (I also asked for feedback and was told EV couldn’t provide any). Now, I am not claiming I am a ‘slam dunk’ choice for the board—EA has many highly talented people. However, I did find it discouraging, not least because I am interested in, and would have had legitimacy for, exploring institutional reforms. It reduced my confidence that, despite what you say, ‘central EA’ really is open to diverse voices or further decentralisation.
I think it’s a mistake to conflate making things more democratic or representative and making them more decentralised—historically the introduction of more representative institutions facilitated the centralisation of states by increasing their ability to tax cities (see e.g. here). In the same way I would expect making CEA/EVF more democratic would increase centralisation by increasing their perceived legitimacy and claim to leadership.
Yes, I think there’s a lot of sliding between “decentralised” and “democratic” even though these have pretty much nothing to do with each other.
As a pretty clear example, the open source software community is extremely decentralised but has essentially zero democracy anywhere.
I take it you’re saying making things more democratic can make them more powerful because they then have greater legitimacy, right? More decentralised power → large actual power?
I suppose part of my motivation to democratise CEA is that it sort of has that leadership role de facto anyway, and I don’t see that changing anytime soon (because it’s so central). Yet, it lacks legitimacy (i.e. the de jure bit), so a solution is to give it legitimacy.
I guess someone could say, “I don’t want CEA to have more power, and it would have if it were a members society, so I don’t want that to happen”. But that’s not my concern. If anything, what your comments make me think is (1) something like CEA should exist, (2) actual CEA does a pretty good job, (3) nevertheless, there’s something icky about its lack of legitimacy (maybe I’m far more of an instinctive democrat than I thought), (4) adding some democracy stuff would address (3).
I’m confused about the mathematics of a fee-paying membership society. I’m having a hard time seeing how that would generate more than a modest fraction of current revenues.
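For what it’s worth, a toy back-of-envelope calculation illustrates why fee revenue looks small next to a CEA-scale budget. All three numbers below (member count, annual fee, budget) are made-up assumptions for illustration, not figures from this thread:

```python
# Back-of-envelope check: could membership fees fund a CEA-scale organisation?
# Every figure here is an assumption chosen only to illustrate the arithmetic.
members = 10_000          # assumed number of dues-paying members
annual_fee = 50           # assumed fee per member per year, in USD
assumed_budget = 30_000_000  # assumed CEA-scale annual budget, in USD

fee_revenue = members * annual_fee
fraction = fee_revenue / assumed_budget
print(f"Fee revenue: ${fee_revenue:,} ({fraction:.1%} of assumed budget)")
# → Fee revenue: $500,000 (1.7% of assumed budget)
```

Under these (hypothetical) numbers, fees cover under 2% of the budget, which is roughly the “modest fraction” worry: even an order-of-magnitude error in the assumptions leaves a large gap.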
It’s not clear what the “central convening and coordinating parts” are. Neither Current-CEA nor Reformed-CEA would have a monopoly on tasks like funding community builders, funding/running conferences, and so on. They are just another vendor who the donors can choose to hire for those purposes. There is and would be no democratic mandate that donors who would like to fund X, Y, and Z are obliged to go through CEA.
I think your model is correct insofar as the membership society could assert independent control of certain epistemically critical functions that are relatively less reliant on funding (e.g., the Forum).
The extent to which “convening and coordinating” is effective may depend on whether there is money behind those efforts. Stated more directly, to what extent are CEA’s efforts in these areas boosted by the well-known (general yet strong) alignment between CEA and the major funder in the ecosystem? Would Reformed-CEA enjoy the same boost?
I used to work at EA Norway, which is a fee-paying membership society, and thought it might be useful to share more on how our funding worked. This is just meant as an example, and not as an argument for or against membership societies. (Here’s a longer comment explaining how we organise things at EA Norway.)
I can’t speak to EA Norway’s current situation, as I no longer have any position there (other than being a paying member). However, I can say what it was like in 2018–2021 when I was Executive Director (ED). The total income from the membership fee roughly covered the cost of the general assembly. Most of our funding came from a community building grant from the Centre for Effective Altruism (CEA). However, the board made sure to fundraise enough from private donors for my salary. The two main reasons for this were to (i) diversify our funding, and (ii) enable us to make longer-term plans than CEA’s grant periods allowed.
When the board gave approval to accept the community building grant from CEA, we discussed that if at any point we did not want to follow CEA’s guidelines and success metrics, we would pay back the remainder of the grant. This was definitely easier for us to say and truly mean when we had covered the ED’s salary from other sources, as it meant that if we were to return the funding, we would still have at least one employee. We never ended up disagreeing so much with CEA that we wanted to return the funds, though we were definitely very vocal about any disagreements we had with the groups team at CEA and did push for some changes.
I’m confused about why you think this is required; I don’t think Michael implied it would be.
The society wouldn’t be a good replacement for CEA unless it could attract significant major donor support. As the next paragraph implies, there’s no reason for major donors to support the society if they judge an alternative vendor to be more effective in delivering conferences, etc. As a result, the society would either have to adapt its programs to meet the scoring metrics of the big donors (in which case the democratic nature of the organization isn’t doing much work; the money is still calling the shots) or it would lack funding to perform those functions (in which case the organization isn’t effective on those functions).
As my third paragraph suggested, there are functions the membership society could potentially run on member revenue and small donations. But that is a significant tradeoff.
Yeah, I’ve not spent loads of time trying to think through the details. I’m reluctant to do so unless there’s interest from ‘central EA’ on this.
As ubuntu’s comments elsewhere made clear, it’s quite hard for someone to replicate various existing community structures, e.g. the conferences, even though no one has a literal monopoly on them, because they are still natural monopolies. If you’re thinking “I can’t imagine a funder supporting a new version of X if X already exists”, then that’s a good sign it is a central structure (and maybe should have democratic elements). There are lots of philosophy conferences, but that doesn’t take away from the value of having a central one.
Also, you make the point “well, but would reformed-EA be worth doing if the main funder wouldn’t support it?”. Let’s leave that as an open question. But I do want to highlight a tension between that thought and the claim that “EA is not that centralised”. If how EA operates depends (very) substantially on what a single funder thinks, we should presumably conclude EA is very centralised. Of course, it’s then a further question of whether or not that’s good and what, if anything, should be done by various individuals about it.
Yes, I think the proposal effectively highlights that EA is significantly more centralized than some claim.
My guess is that you would have to add a claim like “Funders should not fund ‘central convening and coordinating’ functions except as consistent with the community’s will” to get anywhere with your proposal as currently sketched. That’s a negative norm, less demanding than an affirmative claim to funding. But I haven’t exhaustively explored the possibilities either.
My own view is that a member-led organization is probably viable and a good idea, but has to be realistic about what functions it could assume.
Well, you’re not going to fund stuff if you don’t like what the organisation is planning to do. That’s generally true.
I don’t mind the idea of donors funding a members’ society. This happens all the time, right? It’s just the leaders have to justify it to the members. It’s also not obvious that, if CEA were a democratic society, it would counterfactually lose funding. You might gain some and lose others. I’m not sure I would personally fund ‘reformed-CEA’ but I would be more willing to do so.
Okay, but the American Philosophical Association “was founded in 1900 to promote the exchange of ideas among philosophers, to encourage creative and scholarly activity in philosophy, to facilitate the professional work and teaching of philosophers, and to represent philosophy as a discipline”, with a modern mission as follows: it “promotes the discipline and profession of philosophy, both within the academy and in the public arena. The APA supports the professional development of philosophers at all levels and works to foster greater understanding and appreciation of the value of philosophical inquiry.” Seems like a membership structure works well.
If, on the other hand, the APA’s mission was to “help solve the greatest philosophical problems of our time by supporting philosophers” or some such, I personally think that a more meritocratic approach seems like a better fit. It’s certainly not obvious to me that a democratic membership structure would be superior.
Or if it were a charity that ultimately had a global mission, I’d hardly expect their mission to best be served by giving as much decision-making power to an intern as a co-founder, even if the charity had a lot of power over the lives of their staff (which presumably it would).
Besides, the APA is just one example of a centralized service in analytic philosophy—Will lists several others, none of which seem democratically run to me (but I admit I haven’t checked[1]).
Yes, it’s the obvious example. The state is extremely different to EA. It’s generally hard-to-impossible to escape the state you’re born into, and it has an enormous effect on your life. I don’t think you should adopt a rule of thumb of “central role, decentralised control” based on the example with the strongest case for democracy.
Edit: If you want to judge for yourselves, the other examples are journals; the Philosophical Gourmet Report—which surveys ‘leading’ philosophers—and a competitor; the Stanford Encyclopedia of Philosophy; a range of services created by two philosophers; surveys of journal rankings; two news aggregator blogs.
Fair—but you probably wouldn’t pick EA’s structure either.
We like our current main billionaires, but from an ex ante perspective relying on billionaires to discern who the right leaders and technocrats are seems dicey. And of course, from the ex post perspective, we’ve had one awfully bad billionaire.
I didn’t expect people to agree with this comment, but I would be interested to know why they disagree! (Some people have commented below, but I don’t imagine that covers all the actual reasons people had)
Having read this I’m still unclear what the benefit of your restructuring of CEA is. It’s not a decentralising move (if anything it seems like the opposite to me); it might be a legitimising move, but is lack of legitimacy an actual problem that we have?
The main other difference I can see is that it might make CEA more populist in the sense of following the will of the members of the movement more. Maybe I’m as much of an instinctive technocrat as you are a democrat, but it seems far from clear to me that that would be good. Nor that it solves a problem we actually have.
I think the standard arguments for democratic membership associations apply. Increases in: membership engagement, perspective diversity, legitimacy and trust (from POV of members), accountability, transparency, and perhaps also stability (less reliant on individual personalities).
True, if there’s as much motivation for the latter as the former. Perhaps more relevantly, you already focus on the much harder task of ending depression. Surely your setting up a membership society, a solved problem, is not insurmountable.
I agree. So better to send them a funding proposal for an EA membership society that you’re going to set up, rather than calling for one of their major grantees to be subject to democratic control?
More broadly, I want to push back on you thinking Open Phil doesn’t “fit in” here. What happens if CEA listens to you and completely restructures how their organization is run and Open Phil doesn’t want that?
There is a very big difference between “Surely setting up a membership society, a solved-problem, is not insurmountable” as directed at the collective leadership that controls about a half-billion in unrestricted spend a year, and the same comment directed at Michael personally. Many challenges are fairly solvable with access to significant monies, but have a low probability of success without that.
Moreover, Michael’s would-be task is harder than what he is saying the collective leadership should do. Legally forming a membership society is not difficult; equipping that society to actually do the stuff CEA/EVF does is the hard part. He would be creating an alternative meta structure that would have to compete with the existing one that has tens of millions per year in support. The potential donors are good EAs; they will look at a request to fund a global conference and consider the marginal value of an additional conference.
I found the second half of the comment to be helpful.
Yeah sorry I should have drawn a stronger link between the first and second half. As in, if Open Phil thinks it’s a good idea, they’ll let CEA do it or fund Michael to do it. If they don’t, CEA can’t do it and Michael can’t do it. That CEA currently has a lot more funding is not the issue.
But of course, Open Phil may well have greater trust in CEA’s general competence than Michael’s since they fund the former and not the latter, so maybe it wouldn’t be quite as easy as that (but maybe for good reason, hard to tell as an outsider). But the attitude of “This is so easy, why don’t you do it??” is so common on this forum and I think it’s holding EA back a lot, so I want to challenge it where I see it.
❤️
Also I wish people gave Giving What We Can more credit; it seems to me like they are basically doing this: membership org, relatively egalitarian donor base of 10k+ people, open access events, etc.
Same with EA Norway, Czech EA, and probably others.
Upvoted; thanks.
As for the last sentence, I think it depends on the nature of the criticism/proposal. Here, I think it’s fair to critique Michael’s proposal on the grounds that it does not acknowledge that the plausible range of action for almost anyone but Open Phil is substantially constrained by Open Phil’s willingness to go along.
That being said, “this seems fairly easy, is there a reason you don’t do it” can be a valid line of argument in appropriate circumstances.
I’d also like to call positive attention to Michael taking a concrete step that could involve a significant personal commitment of time (i.e., applying to be on the EVF board) in addition to writing on the Forum about the issue he sees.
EA Norway did this! (Set up a membership society which includes voting rights in the organization, make a newsletter, run conferences, etc.)
I don’t at all want to diminish how hard they worked, but I don’t think it was as challenging as you imply (e.g. I don’t know their budget but I’m sure it’s way less than $500 million/year).
Setting up a society can definitely be done on much less than $500M/yr! The point of my remark was to contrast what Michael had called on leadership to do with what sounded like a suggestion that Michael do the same thing personally. The reference to $500M was meant to underscore the extreme difference between Michael’s ask and what was asked of him in return, not to suggest $500M was necessary to form a society.
That being said, while I haven’t seen a recent budget for CEA, my assumption is that running an organization that could serve as a potential replacement for CEA (which is what this particular subthread was about in my view) would cost tens of millions USD per year. Michael’s view (as roughly/imprecisely summarized by me) is that many central coordinating functions (e.g., “natural monopolies”) currently handled by CEA should instead be run by a membership society. So the example of EA Norway doesn’t really update my belief in my assertion, in the context of this discussion, that “[l]egally forming a membership society is not difficult; equipping that society to actually do the stuff CEA/EVF does is the hard part.”
The main crux here might be the extent to which CEA has a monopoly on supporting people who want to do good effectively.
To the extent that it is a monopoly, it’s harder for people to start new projects in the space simply because they didn’t get there first.
To the extent that it isn’t a monopoly, anyone who thinks CEA could be much better can always try to start their own thing. Yes it would be very hard; it was very hard for the founders of CEA too.
But I think CEA is much less of a monopoly than it seems a lot of EAs think it is.
That’s part of the point of this post, right? There’s even an example of people starting a competitor to CEA in the ‘EA student group support’ space, getting funding from Open Phil, and having people like Will say they did a great job. And before Probably Good, there was only one org providing EAs with careers advice; but instead of calling for 80,000 Hours to make big changes to the way 80,000 Hours thinks they should run their free service, Omer and Sella started Probably Good, with financial support from Open Phil and encouragement from 80,000 Hours. In the ‘EA career support’ space, there’s also now Successif, Magnify Mentoring and High Impact Professionals, each focusing on areas they thought needed more attention.
“Ah, but the conferences are much more important as a centralized function and they are basically a monopoly.” In 2018, CEA gave a $10,000 grant to a competitor conference that had 100 attendees.
“But the EA Forum!” There are tons of Slack spaces and Facebook groups etc. not run by CEA—CEA is definitely not in control of all online discussion between EAs. But maybe a competitor forum is next on the list (not something Michael’s particularly concerned about though, so maybe someone else wants to have a go).
“Community Health!” Oh my god, if you found a successful competitor to the Community Health team, I will shower you with praise and gratitude. And I wouldn’t be surprised if they did too.
“Okay, maybe not CEA, but Open Phil!” Future Fund. Regardless of how FTX turned out, this was at least a proof of concept.
“Look, EA just needs to be radically different but there’s already an EA!” Start your own movement. Holden and Elie thought charity evaluators should be a lot better so they started GiveWell. The Oxford crew thought people should be doing good better so they started CEA. If you think EA just fundamentally needs to be more democratic but keep everything else the same, start a movement for Democratic Effective Altruism. I might even start one for Do-acratic EA.
I think that’s one major crux.
There’s likely a second crux that influences how one views the extent to which CEA/EVF is a “monopoly” or has extreme advantages. That is whether it is advisable for the same organization (EVF) to be the primary provider of many different kinds of important coordinating functions, or whether that gives it too much power.
If that isn’t a concern, then pointing to the existence and viability of organizations that work in the same spaces at CEA/EVF orgs is a fairly good response.
“Start your own orgs” is still a possible response if one concludes that CEA/EVF’s dominant market position in numerous forms of coordination is a problem. However, I think the difficulty level is raised two orders of magnitude from most of the examples you gave:
The first raise is that the new org has to outcompete the EVF org to displace the latter from its role as the primary provider of the coordination system.
The second raise is that this would need to happen over several different coordinating functions to reduce CEA/EVF’s influence to an appropriate level.
(Although I would prefer a meta with less power concentration, “democratic” is not the primary word I’d use to justify that preference.)
It’s definitely hard to replace CEA! But this thread has an air of helplessness, like there are only 10 people in the world who can do anything in EA, and this seems immediately falsified by the large number of people who are doing things in EA, including specifically the stuff Michael suggests like having membership societies.
(Note: I don’t know to what extent you endorse this view of helplessness, so feel a little like I’m picking on you here, but I feel fairly confident that the median reader would take away a sense of helplessness from this thread.)
How decision making actually works in EA has always been one big question mark to me, so thanks for the transparency!
One thing I still wonder: How do big donors like Moskovitz and Tuna and what they want factor into all this?
This tweet told me a lot:
Not sure about Tuna.
Cari is more engaged in the week to week than Dustin is.
“Given that Open Phil is responsible for a large share of EA funding, including apparently 70% of movement-building funding, should we consider them largely responsible for EA as a whole, even if not solely responsible?”

I’m wary of trying to treat Open Phil as 70% responsible for the community for a few reasons:
In practice, I can see this ending up as something more like 95%. Everyone else feels like they’re not responsible because they’re not mainly responsible.
Funding movement-building isn’t the only way to have responsibility for EA. If someone has never donated to movement-building, do they have zero responsibility? Even if they’re an associated public figure or talking about EA to the press or advocating for EA-aligned policy changes etc? The whole picture is actually pretty complex.
I think on the whole, EAs should move more in the direction of taking responsibility than pointing fingers (for reasons the OP mentions, e.g. I think the attitude of “Open Phil’s got X covered” would generally make EA worse). I think it’s a bad sign that the first comment on this post is essentially, “…So can we blame Open Phil?”
Having said that, I am surprised at how little people have been pointing fingers at Open Phil relative to EVF in recent months. I suspect that’s partly because a lot of people didn’t have a good sense of the funding landscape, so perhaps that 70% is a good stat to highlight.
FWIW, Open Phil is also largely responsible for non-movement building EA funding, but the rest of your comment still seems to stand replacing “movement-building” with “EA organizations/work”.
I think nuance is important here. Who should take what kind of responsibility? There should be responsibility to take at multiple levels (within an organization, the board, etc.), but Open Phil has the opportunity to deny funding and pressure organizations and individuals in different directions. Other than funding and Open Phil, there are internal decisions/processes, legal processes, shaming and disinvitations from EA events, maybe others. Even if those fail, don’t happen or don’t apply, we can still put pressure on their funding. If Open Phil is a major source of their funding, this will largely fall on Open Phil.
And Open Phil has a responsibility to do at least some due diligence, too.
Yes, sorry, nuance is important, I haven’t done the hard work of figuring out the details, and if you want to make EA better then it’s important to be aware of the key levers currently at play.
I’m just trying to push back on what I see as an unhealthy trend in EA away from the mindset of “How can we do better for the world?” towards “How can you do better for the world?” or even “How can you do better for me?” (Although I need to keep remembering that this phenomenon seems much more pronounced on this forum than IRL!)
I’m certainly not an expert in institutional design, but for what it’s worth, it feels really non-obvious to me that:
Like, I think projects find it pretty hard to escape the sense that they’re “EA” even when they want to (as you point out), and I think it’s pretty easy to decide you want to be part of EV or want to take your cues from the relevant OP team and do what they’re excited to fund, whereas ignoring consensus around you, taking feedback but doing something else, and so on, seem kind of hard, especially if your inside view is interested in doing something no one wants to fund!
I see EV as an EA organization, historically, by name (“effective”), by its board composition and by some of its own statements, especially its mission.[1] If an org doesn’t want to be perceived as part of the EA movement and potentially entangled with it in other ways, should they be housed by EV?
Yes I was very surprised to hear the suggestion that Longview, Wytham or the Gov.ai were not EA projects! This is also contradicted by previous statements from the board of EVF:
and again:
To make it even more clear, many of these projects used to be part of CEA.
I feel like orgs don’t get many benefits from being “publicly EA”, but they get some costs.
The narrow EA community seems good at knowing which projects are “basically EA”.
I think to non-EAs, the EA brand might be more of a liability for many orgs than a plus. (It also can be a liability for EA, in that if the org does poorly, EA could be blamed, like with FTX)
They probably get the benefit that they are more likely to get a lot of money from EA. I don’t think the “brand” is well known enough to be that much of a cost.
Maybe what’s going on here is vagueness, and me being unclear.
Jeff’s clarification is helpful. I could have just dropped “part of the EA movement or” and the sentence would have been clearer and better.
The key thing I was meaning in this context is: “Is a project engaging in EA movement-building, such that it would make sense that they at least potentially have obligations or responsibilities towards the EA movement as a whole?” The answer is clearly “no” for LEEP (for example), and “yes” for CEA. On that question, I would say “no” for GovAI, Longview or Wytham, though I’ll caveat I don’t lead any of those projects so that’s just my perception.
“To make it even more clear, many of these projects used to be part of CEA.”
If you mean CEA-the-project (not CEA-the-former-legal-entity), that’s true of EA Funds and GWWC (though GWWC predates CEA and was separate from it prior to merging and then separating again), but not the others.
If you mean CEA-the-legal-entity, the name change from CEA UK and US → Effective Ventures UK and US was when the legal entity started housing more projects that aren’t focused on EA movement-building, and was done in part so that a project’s being housed at EV UK or EV US wouldn’t be understood to mean it was engaged in EA movement-building. (Clearly we should have communicated better about this, as it’s led to a lot of confusion.)
The phrasing “don’t consider themselves to be part of the EA movement or engaged in EA movement-building” is ambiguous on whether both are true. If they mean it in the sense that “not all are both” then, for example, the claim that LV, WA, and GAI are not engaged in EA community building, and GAI is additionally not part of the EA movement would be consistent with your EV quotations.
I didn’t find it ambiguous. I interpreted it as “not (A or B)”, which is the same as “neither A nor B”, and “not A and not B”.
Not for multiple X.
Are we saying that LV is not engaged-in-EA-community-building and also not part-of-the-EA-movement, and also WA is not engaged-in-EA-community-building and also not part-of-the-EA-movement, and also GAI is not engaged-in-EA-community-building and also not part-of-the-EA-movement…or that for each project one or both apply (so that, say, LV could be part-of-the-EA-movement but not engaged-in-EA-community-building)?
I guess there’s a fine line between “projects that don’t consider themselves to be part of the EA movement or engaged in EA movement-building” and “projects in effective altruism” e.g. IIRC Open Phil has been treading that line for years.
Also, if they don’t want to be part of the EA movement, should they limit the amount of (or share of their) funding they take from large EA funders like Open Phil and EA Funds, limit their attendance to EA events like EA Global, limit hiring of EA community members (including and especially the board) and maybe even limit personal relationships with members of the EA community? (I don’t know that they do or don’t.)
I mean there are other reasons for orgs to want to be housed under EV and their general marketing doesn’t mention EV (e.g. I didn’t know Centre for the Governance of AI was housed under EV until I read this post).
A factor in favour of a more coherent EA that this post misses is the importance of policy advocacy.
I think in almost all of the spheres we care about governments hold most of the levers. For instance, it would be within the power of many countries to unilaterally solve the global insecticide-treated bed net problem if they were sufficiently motivated. I could make this a very long list.
As a public servant and ministerial adviser, I’ve been on the receiving end of well-coordinated campaigns by global not-for-profits. They’re extraordinarily good at getting their way. The example I usually think of is Save the Children. It’s hard to know for sure, but StC seems to have fewer people involved than EA and less money. But they have significant access to the leaders of dozens of countries; the ability to drive multilateral agreements through international decision-making bodies (see the UN Declaration on the Rights of the Child); and genuine geostrategic influence.
The EA movement has all the ingredients (global reach; motivated talent; money) necessary to have influence of that kind, or more. But we chose not to (for many of the reasons outlined above), and I think our impact suffers hugely because of that choice. I think we’ve made a bad deal. I would much rather we paid the price of coordination, managed the risks that result, and used it to be serious players in global policy.
I think popular support is a rather critical ingredient here—even if organizations like StC have relatively few people directly involved with the organization, most of them can plausibly claim to be speaking for the general population and can successfully generate political consequences through appeals to the public if their desires aren’t taken into consideration.
Although I would love the idea of trying to fund a similar movement to (e.g.) increase and improve global aid policy, this theory of change is limited by the need for the proposed action to be something the public can easily be convinced to support.
I’m not sure that’s true for two reasons:
First, there are a lot of niche special-interest groups that get their way with government. There are lots of ways to pressure government for a policy outcome that aren’t a simple popular appeal.
Second, I don’t think it’s impossible to build public support for many things we believe in. I think a message like “it’s unacceptable that in the current year there are people who need something as simple as a net, but don’t have access to it—government could fix this today” could easily have popular appeal. Or at least enough popular appeal that an alliance of governments would put a few per cent of their aid budgets on the problem and fix it. I agree that not everything we care about could work this way, but many things could.
While I am sympathetic to more policy work, I am not sure how this ties into centralization within EA more generally. Funding is very centralized, so that’s not an impediment to the big funder dropping nine figures a year on lobbying if desired.
I think EA is generally a mediocre brand for lobbying efforts—it has too elitist a vibe (billionaires trying to influence how my tax money is spent) and will remain vulnerable to FTX attacks for at least several years. So beyond providing the funding, I think too much visible coordination with the rest of EA is likely to be net negative.
I think the ‘good vibes’ that help policy advocacy come (in part) from benefiting from other people’s positive externalities. That’s to say, I’d like us to be in a position where we can say “we’re the movement that achieved X, Y and Z. So when we ask your nation to (put 5% of its aid budget to bed nets) you should take us seriously”.
To the extent that we’re more centralised and coordinated, it’s easier to say “we’re the movement that achieved X, Y and Z”. When we intentionally fracture and distance ourselves, we also fracture and distance ourselves from those positive externalities.
Again, I recognise that there is give and take here, with both risks and opportunities. I just think that we need to count this potential path to policy impact as an ‘opportunity’ that we largely pass up when we choose to distance our work and organisations from one another.
Here is an anonymous poll.
You can see what others think and add your own points (by using the little edit button)
https://viewpoints.xyz/polls/decision-making-and-decentralisation-in-ea
see results: https://viewpoints.xyz/polls/decision-making-and-decentralisation-in-ea/analytics
[Screenshots: what the poll looks like; current agreement; current uncertainty]
Would one solution to the lack of diversity in funders be to break up OpenPhil? And I don’t just mean separating their different teams; I mean taking some of their assets and making another completely separate and independent grantmaking organisation (e.g. for longtermism), with different staff, culture, etc.
Just a note that while forum users might have opinions on this proposal, this is ultimately a question for Cari and Dustin (I think this point is too often forgotten).
Yep, of course it’s their decision. But we can suggest what we think is the best thing to do with it. They could choose not to do that; that doesn’t make it the right thing. The point I’m making here is: IF they believe that a better-functioning EA community will give better results, then they (maybe) ought to want to break up Open Phil into a couple of organisations.
Yeah I didn’t mean to accuse you of having forgotten that point (the language was a little mercenary but I assumed you weren’t being literal), I just think it’s worth reminding forum users in general to keep this in mind throughout any further discussion.
I also had this reaction, and I think it was mostly just the phrasing. “Break up OP” suggests that we have the power or right to do that, which we definitely don’t. I think if the post said “OP could consider breaking itself up” it wouldn’t sound like that.
OpenPhil donates to the EA Infrastructure Fund, which is kind of like this. They also have funds for regrantors like this.
Sort of. Except it’s a donation, so the latitude for difference is much less. EA Funds is also less established in terms of investigations, coherent worldviews, etc. (not that individual grantmakers don’t have these, but it’s not the same as the rigour of Open Phil), which means EA Funds looks not too dissimilar to Open Phil 2.0, I think.
I think the proposal to have significant re-grantors is a more approachable way of achieving something similar, in that it delegates control of some funds.
I think this is a less bad idea than some think, but I think it would require people whom Moskovitz and Tuna trust, most of whom work at Open Phil. I’m therefore unsure that this would result in very different decision-making than at present.
A really good thing about FTX was having another pole of funding.
I agree, it’s not useful if it’s just Open Phil 2.0. I’d hope Cari and Dustin would work out a way for it not to be that, if they wanted to.
Sure, but if we’re making suggestions, then we should suggest that too, right? And that does seem to be a core problem.
Thanks for this post, Will. I believe you’ve touched on many points that many of us have been pondering. I’ve translated it into Spanish, as I feel it’s relevant to the entire community.
I want to just appreciate the description you’ve given of interaction responsibility, and pointing out the dual tensions.
On the one hand, wanting to act but feeling worried that by merely getting involved you open yourself up to criticism, thereby imposing a tax on acting even when you think you would counterfactually make the situation better (something I think EA as a concept is rightly opposed to, in theory).
On the other hand, consequences matter, and if in fact your actions cause others who would have done a better job not to act, and that’s predictable, it needs to be taken into account. This is all really tough, and it bites for lots of orgs or people trying to do things that get negative feedback, and it also bites for the orgs giving negative feedback, which feels worth bearing in mind.
Most philanthropy is not from billionaires, so the fact that most EA philanthropy is from billionaires means that EA has been unusually successful at recruiting billionaires. This could continue, or it could mean revert. So I do think there is hope for more funding diversification.
That’s true, but it is pretty fat-tailed. These statistics don’t break down by wealth, but you’ve got about one-quarter of US charitable giving coming from foundations and corporations.
The individuals slice isn’t broken down. However, we can suspect that the ~30% of total contributions given to religious organizations came predominantly from individuals, meaning that the concentration of non-religious charitable giving is probably higher than these numbers suggest.
Thanks for the link. This shows that 3% of global wealth is held by billionaires. Though richer people generally give a larger percentage of their income, it’s not clear they give a larger percentage of their wealth, because many people with near-zero wealth still have significant income and still donate to charity. So I would guess ~3% of donations from individuals/foundations come from billionaires.

Corporations, you point out, are 6% of the US total. It’s not clear to me how to classify this, but generously you could go by market capitalization, and I would guess a minority of corporate donations come from companies worth more than $1 billion. So that would mean something like 5% of donations coming from billion-dollar individuals/companies, whereas EA might be at 75%. It would be great to get actual breakdowns of donations by income/wealth outside EA.

Will makes the claim that “it’s more or less inevitable that much or most funding in EA will come from a small handful of donors.” To me this implies under 10. But if EA became a mass movement like environmentalism, then I think this would not be true, at least for “most of funding coming from <10 donors.” If the majority came from more than 10 donors, I think people would be significantly less worried about centralization of funding in EA.
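The back-of-envelope estimate above can be written out explicitly. All the numbers here are the rough guesses from the comment (the 3% wealth share, the 6% corporate share, and an assumed minority fraction of corporate giving from billion-dollar companies), not measured data:

```python
# Back-of-envelope estimate of the share of charitable giving that comes
# from billion-dollar individuals/companies, using the illustrative
# assumptions in the comment above (guesses, not measured data).

corporate_share = 0.06                    # corporations' share of US giving
noncorporate_share = 1 - corporate_share  # individuals + foundations

billionaire_wealth_frac = 0.03  # ~3% of wealth held by billionaires, assumed
                                # to track their share of individual giving
big_company_frac = 0.40         # guess: minority of corporate giving comes
                                # from companies worth over $1 billion

billionaire_giving = (noncorporate_share * billionaire_wealth_frac
                      + corporate_share * big_company_frac)
print(f"Estimated billionaire share of giving: {billionaire_giving:.1%}")
# prints "Estimated billionaire share of giving: 5.2%"
```

Which is where the “something like 5%” figure comes from, compared with the guessed ~75% within EA.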
I think Will’s statement is mostly correct with the background of who the existing donors are. How much billionaires (and near-billionaires) donate as a percentage of their wealth in general is much less important to assessing his claim than what the specific billionaires and near-billionaires on board intend to donate.
Even for GiveWell, which has a significantly easier road to being a mass movement than most of EA/EA-adjacent work, over half of its revenue came from 18 donors out of 41,862 [p. 18 of https://files.givewell.org/files/metrics/GiveWell_Metrics_Report_2021.pdf] even before one considers that over half of its impact came from direct-to-charity grants from Open Phil not included in those numbers. Over half of the total donors were in the under-$1,000 bucket, so it’s not that small donors weren’t present.
Of course, the centralization of funding would be less pronounced in a true mass movement. But mass movements take a lot of time and energy to cultivate all those small/mid-size donors . . .
I agree that Will’s statement is correct for the near term. But Will also said that his vision is that, like science is the agreed way of getting to the truth, EA should be the agreed way of getting to the good. I think that would imply that EA has become a mass movement.
As FTX was imploding, Will wrote on Twitter “If FTX misused customer funds, then I personally will have much to reflect on.” It now seems very clear that FTX did misuse customer funds (1, 2), but to my knowledge Will hasn’t shared any of his reflections publicly, beyond that initial Twitter thread. It seems odd to me for him to offer thoughts on the best way forward for the movement without acknowledging or having reckoned in a substantive way with his own role in the largest challenge faced by that movement to date.
If Will has published a post-mortem or analysis of what went wrong and I’ve missed it, I do apologise and will retract this comment (and would appreciate a link).
See here for more context.
Thanks! Will writes “The independent investigation commissioned by EV is still ongoing, and the firm running it strongly preferred me not to publish posts on backwards-looking topics around FTX while the investigation is still in-progress. I don’t know when it’ll be finished, or what the situation will be like for communicating on these topics even after it’s done.”
To be honest, I don’t find this particularly inspiring: it feels a lot like a cop-out. I also think that in this post he could have included a disclaimer about his own track record of errors of judgement, without going into detail about those errors. The fact that he chose not to is disappointing.
I think this is a little unfair to Will. If an independent investigation asks you not to discuss something then presumably this is because they worry that you speaking would interfere with their investigation (perhaps they think it’s valuable to get independent views of what happened, rather than views informed by dialogue between different parties).
To my mind, if Will refused to heed a request from an independent investigation this would be strong evidence that he hadn’t learned the lessons of FTX (that he hadn’t learned the importance of good governance norms). The fact that he’s heeding the request, despite clearly wanting to speak out, I think is at least weak evidence that Will has learned valuable lessons here. I certainly think it’s unfair to call this a cop out.
Agreed, although it does raise potential questions for me about the firm and/or investigation. A request to avoid making public statements to avoid tainting investigative leads of a small-to-midsize organization is easier to justify early on than many months in. It’s hard to assess how reasonable it is without inside knowledge, though.
What track record? He says “I wish I’d been far less trusting of Sam and those who’ve pleaded guilty”—what else are you thinking of?
One strong −5 disagree-vote...oh yeah, you’re right, that link didn’t provide any more context haha, silly me.
Very well written and eye-opening post, thanks Will!
Something I’d be really excited to see, and that I think would be really useful for community builders when doing outreach/speaking to people very new to the movement:
How is this on the decentralisation list?
Yes, I generally think of things like a meritocratic officer corps as being a pro-centralisation move, vs relying on personal connections and military aristocrats with independent sources of legitimacy.
I think this is related to: “Distance myself from the idea that I’m “the” face of EA...Trying to correct this will hopefully be a step in the direction of decentralisation on the perception and culture dimensions....I’m also going to try to provide even more support to other EA and EA-aligned public figures, and have spent a fair amount of time on that this year so far. ”
1. Overall thoughtful and helpful, but one major error which I hope you will be relieved to know about, and I’m sure others will be too:
>Assuming I’m right that, currently, perception doesn’t match reality, it means the core projects and people in EA should communicate more about what they are and are not taking responsibility for.
I think this is very unlikely to be successful, and it places a huge unwelcome “should” on a bunch of busy EAs, some of whom won’t be good at doing PR/comms/promo work on their own role.
It would be much better, easier and quicker to have a comms team or podcast or youtube channel with the specific responsibility to build an accurate perception, namely that EA is a fleet or regatta, and not a supertanker.
I would love to hear interviews in a podcast of senior EAs focused on their role and responsibilities and differences between perception and reality, what gaps they see that do-ers could fill.
2. In a separate comment.
Don’t you think there are some minimal values that one must hold to be an Effective Altruist? E.g. Four Ideas You Already Agree With (That Mean You’re Probably on Board with Effective Altruism) · Giving What We Can.
It seems to me that there are some core principles of Effective Altruism such that if someone doesn’t hold them, I don’t think it’d make sense to consider them an Effective Altruist.
To be clear, I don’t disagree that anyone can call themselves part of the EA movement. I’m more wondering whether I would/should call someone an Effective Altruist if, for example, they don’t think it’s important to help others.
On diversity, the biggest deficit is in language and in representation across all continents, and with that come both conscious and unconscious limitations. This could be addressed through:
(a) existing and future granting programmes
(b) real commitment to acceleration in Asia-Pacific, Africa, Latin America etc
… maybe micro-offices in those continents?
(c) job ad placements “always in UN languages and in the Global South before English” to give non-native English speakers a fair chance / time to translate, etc.
(d) translation of headlines of important news / tweets into the UN Security Council languages
(e) I have more but it’s late, call me?