A libertarian socialist’s view on how EA can improve

Following up on my post criticising liberal-progressive criticisms of EA, I’m bringing you suggestions from further to the left on how EA could improve.

In general, libertarian socialism opposes the concentration of power and wealth and aims to redistribute them, either as a terminal goal or an instrumental goal.

This post is divided into 3 sections—Meta EA, EA Interventions and EA Philosophy.

Meta EA

Most of the interventions I propose are improvements in EA’s institutional design and safeguards, which should, in theory, increase the chances that resources are spent optimally.

Whether we are spending resources optimally is near-impossible to measure and evaluate, so we have to rely on theory. Regardless of whether my proposed interventions work or fail, there would be little direct evidence either way.

EA relies on highly uncertain expected value (EV) calculations that are vulnerable to motivated reasoning, and EA is *no less* vulnerable to motivated reasoning than other ideologies. Because it is not possible to *detect* suboptimal spending, we should not wait for strong evidence of mistakes or outright fraud and corruption to make improvements, and we should be willing to bear small costs to reap long-term benefits.

EA priors on the influence of self-serving biases are too weak

In my view, EAs underestimate the role that self-serving biases play in imprecise, highly uncertain expected value (EV) calculations around decisions such as buying luxurious conference venues, lavish community building expenditure, funding ready meals and funding Ubers, leading to suboptimal allocation of resources.

When concerns are raised, I notice that some EAs ask for “evidence” that decisions are influenced by self-serving biases. But that is not how motivated reasoning works—you will rarely find *concrete evidence* for motivated reasoning. Depending on the strength of self-serving biases, they could influence expected value calculations in ways that justify the most suboptimal, most luxurious purchases, with no *evidence* of the biases existing.

Concrete suggestions for improving EV calculations, which I also discussed in another post (a rough numerical sketch follows the list):

  1. Have two (or more) individuals, or groups, independently calculate the expected value of an intervention and compare results

  2. In expected value calculations, identify a theoretical cost at which the intervention would no longer be approximately maximising expected value from the resources

  3. Keep in mind that EA aims to make decisions that approximately maximise expected value from a set of resources, rather than decisions which merely have positive expected value
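To make suggestions 1 and 2 concrete, here is a rough numerical sketch in Python. It is purely illustrative: the probabilities, values, costs and the "funding bar" are made-up assumptions, not figures from any real grant.

```python
# Illustrative sketch only: the probabilities, values, costs and the
# "funding bar" are made-up assumptions, not figures from any real grant.

def expected_value(prob_success, value_if_success, cost):
    """Net expected value of an intervention."""
    return prob_success * value_if_success - cost

# Suggestion 1: two people estimate the same intervention independently.
estimate_a = expected_value(prob_success=0.30, value_if_success=500_000, cost=60_000)
estimate_b = expected_value(prob_success=0.10, value_if_success=500_000, cost=60_000)

# A large gap between independent estimates is a flag to investigate before
# funding, rather than something to average away.
divergence = abs(estimate_a - estimate_b) / max(abs(estimate_a), abs(estimate_b))
print(f"Estimate A: {estimate_a:,.0f}  Estimate B: {estimate_b:,.0f}  divergence: {divergence:.0%}")

# Suggestion 2: find the cost at which the grant stops beating the
# counterfactual use of the money (an assumed "funding bar" in value per
# dollar). Above this cost the grant may still have positive EV, but it is
# no longer approximately EV-maximising.
funding_bar = 2.0  # assumed counterfactual value generated per dollar spent elsewhere
prob_success = 0.30
value_if_success = 500_000
break_even_cost = prob_success * value_if_success / funding_bar
print(f"Break-even cost: {break_even_cost:,.0f}")
```

The point of the break-even cost is that a grant can have positive expected value and still be a misallocation if the same money would do more elsewhere (suggestion 3).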

EAs underestimate the importance of conflicts of interest and the distribution of power inside EA

There is a huge amount of overlap across the boards, governance and leadership of key EA organisations. This increases the risk of suboptimal allocation of resources, since, in theory, funders are more likely to give too much funding to other organisations with connected leadership.

Although I think a certain degree of coordination via events such as the Leaders Summit is good, a greater degree of independence between institutions may help reduce biases and safeguard against misallocation.

Concrete suggestion:
I would recommend that individuals be allowed to hold a leadership, board or governance position in only one EA organisation each. Beyond reducing risks of bias in funding allocation, this would also help to distribute power at the top of EA, safeguarding against individual irrationality and increasing diversity of thought, which may generate additional benefits.

If this seems like a bad idea, try the reversal test: do you think EA orgs should become more integrated?

EDIT 1 at 43 upvotes: Another potential intervention could be to split existing organisations into more organisations. I can’t think of an organisation where this would be obviously suitable, so I am not advocating for it right now, but I think it would make sense for organisations to split as they grow further in the future.

EA organisations underinvest in transparency

Many EA organisations do not write up funding decisions, or do so only after long delays, due to low capacity. This weakens safeguards against misallocation of resources by making it more difficult for the community to scrutinise grants and detect conflicts of interest, biased reasoning or outright corruption.

Previous discussion of this on the EA Forum has indicated what I consider to be overconfidence in decision making by funders. Others have implied that a low probability of failures currently happening may justify not investing more in transparency.

Firstly, as I often say, funding decisions in EA rely on highly uncertain EV calculations which are prone to motivated reasoning. The probability of biased reasoning and correctable misallocation of resources does not seem low. The probability of *outright* corruption, on the other hand, does seem low.

But importantly, the function of transparency is primarily as a long-term safeguard and disincentive against these things. *Detecting* poor reasoning, bias and corruption is only a secondary function of transparency.

There are costs to implementing transparency. I don’t think EA should aim to maximise transparency (e.g. by publishing grant write-ups the same day grants are made), but transparency should be increased from its current level. I think the costs of improving transparency are worth bearing.

If this seems implausible, try the reversal test: do you think EA orgs should invest less in transparency than they do now, to allow faster grantmaking?

I made a separate post about this recently: https://forum.effectivealtruism.org/posts/G9RHEcHMLguGJY7uP/you-should-have-capacity-for-more-transparency-and

More on this topic:

https://forum.effectivealtruism.org/posts/4iLeA9uwdAqXS3Jpc/the-case-for-transparent-spending

https://forum.effectivealtruism.org/posts/sEpWkCvvJfoEbhnsd/the-ftx-crisis-highlights-a-deeper-cultural-problem-within

https://forum.effectivealtruism.org/posts/PkFenL3DcEJDjERwY/ftx-prob-related-strongly-recommending-creating-an-internal

EDIT 2 at 108 upvotes:

Concrete suggestion:

Grantmaking orgs should set and adhere to targets of writing up the reasoning behind every approved grant on the EA Forum within a certain timescale (e.g. 1 month).

EA hasn’t sufficiently encouraged entrepreneurship-to-give as a strategy to diversify funding

“Diversify donors” is an obviously intractable solution to power concentration in EA—EA isn’t exactly turning large donors away to protect Dustin Moskovitz’s influence.

But as I have written elsewhere, EA earning-to-give discussions have focused too much on high-paying jobs, and not enough on entrepreneurship-to-give, which may be more likely to generate additional large donors and so distribute power away from Open Philanthropy, Cari Tuna and Dustin Moskovitz. (I also think entrepreneurship-to-give will simply generate more money overall.)

More on this topic:

https://forum.effectivealtruism.org/posts/cdBo2HuXA5FJpya4H/entrepreneurship-etg-might-be-better-than-80k-thought

https://forum.effectivealtruism.org/posts/JXDi8tL6uoKPhg4uw/earning-to-give-should-have-focused-more-on-entrepreneurship

EA grantmaking decisions are too technocratic

This section may seem similar to Luke Kemp and Carla Zoe Cremer’s criticisms, but I don’t think they properly explore the downsides of democratising decisions. Exploring those downsides doesn’t mean opposing democratisation; it helps us do it in the best way.

As I wrote in a previous post, it can help to view decision-making structures on a scale from highly technocratic to highly populist. We should not be trying to make the world as populist as possible, and governments should not be making every decision using referendums and citizens’ assemblies. Public opinion is notoriously unstable, highly susceptible to influence by politicians, corporations and the media, sometimes in conflict with facts supported by strong evidence, and has historically been extremely racist and homophobic.

But I think EA funding decisions are on the extreme technocratic end of the scale at the moment, and should be made less technocratic. I think this would improve long-term impact by benefiting from the wisdom of the crowd, incorporating diversity of thought, reducing bias and improving accountability to the community. It would also have the instrumental value of making community members feel empowered, which could help retain EAs.

Concrete suggestions:

  1. For grant decisions where expected value calculations put projects just above or just below an EA funder’s funding bar, the decision on whether or not to fund the project should be put to a vote on the EA Forum (restricted to users with a certain amount of karma) or on a new voting platform created for people accepted to EAG.

  2. Instead of individual EAs donating to EA Funds, a pooled fund should be created for individual EAs to donate to. Projects could apply to this fund and individual donor EAs could then vote on which grants to approve, similar to a DAO (decentralised autonomous organisation) or a co-operative. Again, this could be restricted to people with a certain amount of karma on the EA Forum or people accepted to EAG, to protect the EA character of the fund and ensure that it doesn’t all get spent on problems in Western countries or problems which already receive lots of attention. (A minimal sketch of the voting logic follows this list.)
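For illustration, here is a minimal sketch of how a karma-gated vote on a borderline grant could be tallied. The karma cut-off, approval threshold, usernames and karma values are all hypothetical assumptions, not a proposal for specific parameters.

```python
# Minimal sketch of a karma-gated vote on a borderline grant.
# The karma cut-off, approval threshold and voter data are hypothetical.

MIN_KARMA = 100           # assumed eligibility cut-off (EA Forum karma)
APPROVAL_THRESHOLD = 0.5  # assumed share of "yes" votes needed to fund

def tally_grant_vote(votes, karma):
    """votes: {username: True/False}; karma: {username: int}.
    Counts only voters above the karma cut-off and returns the decision."""
    eligible = {user: vote for user, vote in votes.items()
                if karma.get(user, 0) >= MIN_KARMA}
    if not eligible:
        return "no quorum"
    yes_share = sum(eligible.values()) / len(eligible)
    return "fund" if yes_share > APPROVAL_THRESHOLD else "do not fund"

# Hypothetical example: carol is below the karma cut-off, so 2 of 3
# eligible voters vote yes and the grant is funded.
votes = {"alice": True, "bob": False, "carol": True, "dave": True}
karma = {"alice": 450, "bob": 2300, "carol": 80, "dave": 150}
print(tally_grant_vote(votes, karma))  # -> "fund"
```

The same gating idea applies whether the vote is held on the EA Forum or on a separate platform for EAG attendees; the sketch only shows the tallying step, not how votes would be collected.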

EA Interventions

EAs underestimate the tractability of party politics

EA has accepted the “politics is the mind-killer” dogma from the rationalist community too strongly and has become too discouraged by Carrick Flynn’s run for office.

What’s intractable and of low expected value is focusing on making your side win and throwing lots of money at things.

But if you’re in Western Europe, interested in party politics and extroverted, there’s a good chance that this is the highest-EV career for you, especially if you co-ordinate with other EAs and avoid single-player thinking. If you’re right-leaning, then I’m extra confident in this, because most smart, educated young people join centre-left parties and most EAs are centre-left, so you would face less competition. Importantly, you can do this in a voluntary capacity alongside work or studies. Network, seek to build coalitions supporting causes, be patient and be on the lookout for political opportunities to promote overseas development assistance, climate change mitigation, pro-LMIC foreign policy and trade policy, investment in pandemic preparedness and farmed animal welfare.

EDIT 2 at 108 upvotes:

Concrete suggestion: Regular EAG and EAGx talks, workshops and meetups focused on party politics.

EAs underestimate the expected value of advocacy, campaigning and protest

I’m just going to link https://www.socialchangelab.org here.

EDIT 2 at 108 upvotes:

Concrete suggestion: Regular EAG and EAGx talks, workshops and meetups focused on advocacy, campaigning and protest.

EAs undervalue distributing power

Libertarian socialists see distributing power as an end in itself. Most EAs do not. But EAs underestimate the instrumental value of distributing power and of making it easier for people to advocate for improvements to their own welfare over the long term, instead of having to rely on charity.

An example of an intervention that currently looks tractable is campaigning for the African Union to be given a seat at the G20. In the long term, EAs could campaign for disenfranchised groups such as foreigners, children, future generations and animals to be given political representation in democracies.

EA Philosophy

EAs depoliticise distance in charitable giving

EA philosophy suggests that rich Westerners don’t donate to charities abroad because the beneficiaries are far away, but glosses over key factors other than distance—nationalism, localism and racism.

Many EAs don’t place much value on the distribution of utility

This is more of a plain disagreement, but libertarian socialists inherently care about how power and welfare are distributed across individuals, while many EAs do not. That being said, EA does seem to instrumentally value equality in the pursuit of maximising welfare.

EA undervalues rights

Libertarian socialists place inherent value on strengthening rights and distributing power, while EAs only value this instrumentally. But I think EAs underestimate the instrumental value of strengthening rights too. Valuing rights more would probably make the expected value of political campaigns to influence legislation look higher, especially in the context of farmed animal welfare and international development.

EAs underestimate uncertainty in cause prioritisation and grantmaking

I discuss uncertainty in EA in more detail here but in a different context.

EA relies on highly uncertain, imprecise, vulnerable-to-motivated-reasoning expected value (EV) calculations. These calculations often use probabilities derived from subjective belief rather than empirical evidence. Taking 80,000 Hours’ views on uncertainty literally, they think it is plausible for land use reform to be as pressing as biosecurity.

Work in the randomista development wing of EA and prioritisation between interventions in this area is highly empirical, able to use high quality evidence and unusually resistant to irrationality. Since this is the wing of EA that initially draws many EAs to the movement, I think it can give them the misconception that decision making across EA is also highly empirical and unusually resistant to irrationality, when this is not true.

I think this underestimation of uncertainty may be why EAs are overconfident in decision making, undervalue transparency and the distribution of power within EA, and underestimate the effects of self-serving biases.

EA expectations of extinction may mean that EAs undervalue long-term benefits of interventions

Many of the interventions I have proposed are intended to generate long-term benefits to EA while imposing short-term costs, because the risk of severe misallocation of resources grows over time. I think the expectation among many EAs that AGI will cause extinction in the next 20-30 years leads them to value these interventions less than I do.