The dangers of high salaries within EA organisations

Epistemic status: Quite speculative but I think these risks are important to talk about.

Important edit to clarify what might not have been obvious before: I don’t think EA orgs should pay “low” salaries. I use the term ‘moderate’ rather than ‘low’ because I don’t think paying low salaries is good (for reasons mentioned by both myself and Stefan in the comments). Instead, I’m talking about concerns with more orgs regularly paying $150,000+ salaries (or 120%+ of market rate), not with paying people $80,000 or so. Obviously exceptions apply, as I mentioned to Khorton below, but salaries should at a minimum be at the point where everyone’s (and their families’/dependents’) material needs can be met.

There’s been a fair bit of discussion recently about the optics and epistemics related to EA accruing more money. However, one thing I believe hasn’t been discussed in great detail is the potential for worsening value alignment as salaries increase.

In summary, I have several hypotheses about how this could look:

  1. High salaries at EA orgs will draw people for whom altruism is less of a central motivation, as they are now enticed by financial incentives.

  2. This will lead to reduced value alignment with key EA principles, such that employees within EA organisations might place less focus on doing the most good relative to their other priorities.

  3. If left unmitigated, this could lead to gradual value drift within EA as a community, [1] where we are no longer truly focused on maximising altruistic impact.

The trade-offs

First and foremost, high salaries within EA orgs can obviously be good. There are clear benefits, e.g. attracting high-calibre individuals who would otherwise be pursuing less altruistic jobs. In addition, even the high salaries we pay might still be far below what we perceive the impact of these roles to be, so they can still be worth it from an altruistic impact perspective. However, I think there has generally been little discourse about the dangers of high salaries within EA, so I’ll focus on those in this piece.

The pull of high salaries

One core claim is that EA organisations offering high salaries will attract people who are less altruistically minded than most of those currently working at EA orgs (which I’ll call “EAs now” as a shorthand). This is not to say that people attracted by higher salaries won’t be altruistic at all, but I believe those with strong altruistic motivations would be happy with moderate salaries, as their core values focus on impact rather than financial reward. As high salaries become more common across EA, due to increased funding and the importance of attracting the most talented individuals, this issue could become widespread in the EA community.


To make my point clearer, I’ve sketched out some diagrams below of how salaries might affect the EA community in practice. There are obviously many more factors that people take into account when searching for a job (e.g. career capital, personal fit), but I’m assuming these will stay roughly constant if salaries change.[2]


Whilst oversimplified, I anticipate that if we hire the median applicant from a pool attracted by moderate salaries versus one attracted by high salaries, the applicant from the moderate-salary pool will have a stronger altruistic inclination. In addition, as noted by Sam Hilton here, higher salaries might actually deter some highly value-aligned EAs. This is because jobs with higher salaries have a higher chance of being filled by good candidates, and therefore lower counterfactual impact (which matters more to highly altruistic people). Of course, we might avoid selecting less value-aligned people by having watertight recruitment processes that strongly select for altruism. However, I don’t think this is straightforward. For example, how does one reliably discern between 7/10 altruism and 8/10 altruism? Nevertheless, I’ll touch further below on how I think we can mitigate this potential problem.

The dangers of value misalignment in organisations

One common rebuttal might be “Not everyone in the EA org has to be perfectly value-aligned!”. I’ve certainly heard lots of discussions about whether operations folks need to be value-aligned, or whether highly competent non-EAs could also do operations within EA organisations. My view is that almost all employees within organisations need to be strongly value-aligned, for two main reasons:

  1. They will make object-level decisions about the future of your work, based on judgement calls and internal prioritisation

  2. They will influence the culture of your organisations

Object-level work

It seems pretty clear that most roles, especially within small organisations (as most EA orgs are), will have a significant influence on the work carried out. Some exaggerated examples could be:

  • A researcher will often decide which research questions to prioritise and tackle. A value-aligned one might seek to tackle questions around which interventions are the most impactful, whereas a less value-aligned researcher might choose to prioritise questions which are the most intellectually stimulating.

  • An operations manager might make decisions regarding hiring within the organisation. A less value-aligned operations manager might therefore attract similarly less value-aligned candidates, leading to a gradual worsening in altruistic alignment over time. It’s a common bias to hire people who are like you, which could have serious consequences over time, e.g. a gradual erosion of altruistic motivation to the point where less value-aligned folks become the majority within an organisation.

In essence, I think most roles within small organisations will influence object-level work, involving prioritisation and judgement calls, so it seems unrealistic to suggest that value alignment isn’t important for most roles.[3]

Cultural influence

This is largely based on my previous experience, but I think a single person can have a significant impact on organisational culture. A common phrase within the start-up world is “Hire Slow, Fire Fast”, and I think it exists for a reason. Employees who aren’t value-aligned can lead to conflict, loss of productivity and internal issues that might significantly delay the actual work of an organisation.

In the worst case, similar to the operations manager example, less-than-ideal values can spread around the organisation, either through hiring or social influence. This could lead to a rift between two parties with incompatible views, where trust and teamwork significantly break down. Whilst anecdata, I’ve experienced this myself: a less value-aligned group of two individuals significantly detracted from the work of ten people, to the point where the intense conflict and inability to execute on our goals almost forced some of our highest-performing staff to leave.

Rowing vs steering

Another useful way to consider this issue is Holden’s framework of Rowing, Steering, Anchoring, Equity and Mutiny.

In this case, I’ll define the relevant terms as:

  • Rowing = Help the EA ship (i.e. EA community) reach its current destination faster

  • Steering = Navigate the EA ship to a better destination than the current one

There are clear benefits to paying higher salaries, namely attracting higher-calibre people to work on the world’s most pressing problems. This in turn could mean we actually solve some of these global challenges sooner, potentially helping huge numbers of humans or nonhuman animals. However, there is a trade-off too. We might be rowing faster with more talented individuals, but are we going in the right direction?

If what I’ve hinted at above is directionally correct, and higher salaries could lead to worsening value alignment within EA organisations, then this has the potential to steer the entire EA community off course. Continuing with the rowing vs steering analogy, one can see how this might look in the diagram below. In short, although higher salaries might mean more talented people, and therefore more efficient work, they might ultimately take us off the optimal trajectory. On the other hand, moderate salaries and high value alignment could mean slower progress, but ultimately progress toward maximally altruistic goals.

Some caveats to this might be if you have particularly short AI timelines, or otherwise think existential risk levels are extremely high. In this case, it might be worth sacrificing some level of internal value alignment purely to ensure the survival of humanity.

What is a high salary and what is moderate?

I’m not exactly sure where the line between “moderate” and “high” salaries sits, and interpreting this will likely depend on the reader’s personal experiences. There have been some numbers thrown around for rough salary caps for EA orgs, such as 80% of the for-profit rate for a similar role. However, as Hauke also mentions, I believe this 80% rate should be progressive rather than flat, as it seems somewhat excessive to pay employees of EA orgs $400,000 even if they could reasonably command $500,000 in the for-profit world, which is quite plausible for certain industries (e.g. software development, consulting, quant trading, etc.). There are obvious caveats to this, e.g. increasing the salary of Open Phil’s CEO from $400k (no idea if this is accurate) to $4,000,000 could be worth it if it led them to allocate their ≈$500m/year grantmaking just 1% more effectively. I’m slightly worried about EA using Pascal’s-mugging-style arguments and motivated reasoning to justify extremely high salaries, but I’m not sure how to square this with the possibility of genuinely increased expected value.[4]
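To make the arithmetic in that caveat explicit (both the ≈$500m/year figure and the 1% improvement are illustrative assumptions, not actual Open Phil numbers): a 1% improvement on ≈$500m/year of grantmaking would be worth ≈$5m/year, which is more than the ≈$3.6m/year of extra salary ($4m minus $0.4m).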

Another way to set salaries could be based on the point of negligible marginal returns to emotional wellbeing from higher salaries, where one study suggests a cut-off around $75,000 and another implies it is much higher than this.[5]

How might we mitigate some of these dangers?

  • Salary benchmarks up to 80% of market rate as suggested above, increasing in a progressive fashion such that the percentage of market rate falls as salaries increase (e.g. it could be 60% for a $200,000 market salary, leading to a $120,000 non-profit salary; see the sketch after this list).[6]

    • I’m highly unsure about this, as some EA orgs currently pay above this rate, e.g. this comment highlights an EA org paying ≈150% of the market rate. But there is clearly a need for talented EA operations folks, so getting in a great operations hire who might be worth 5-10x their salary could be worth it from an altruistic impact perspective if it attracts much better candidates.

  • Non-financial perks for employees such as a training and development budget, a mental health budget, 10% of time to be spent on up-skilling, catered food, etc.

  • Performance-based pay (idea from Hauke’s great comment here) based on impact generated.

  • Build intrinsic motivation using ideas from self-determination theory, by allowing employees greater autonomy, relatedness and competence within the workplace.[7] Some tangible tips are listed here.

  • Robust hiring processes that can filter out people with less than what we consider sufficient value alignment, so that we only hire people who meet our current bar for altruism (see diagram below). This is easier said than done, and I’m sure EA orgs are already trying very hard to select for the most value-aligned people. However, I do think this will become increasingly important if EA salaries keep rising, putting additional pressure on recruiters to discern between a 7/10 aligned individual and an 8/10 aligned individual. Whilst this distinction might not seem relevant on a micro level, I believe these differences will compound to become quite important on a community level.

  • Anecdotally, I’ve seen organisations being (or promising to be) hugely impactful serve as a substitute for financial incentives.

    • I saw this whilst working for Animal Rebellion and Extinction Rebellion, doing grassroots movement building. Whilst an extreme example, people (including myself for 2.5 years) often worked for less than UK minimum wage (I was earning approx. £800/month) because we felt that what we were doing was so important, and such an essential contribution to the world, that it needed to happen regardless of our financial gain.

    • Obviously I’m not saying that EA organisations should promise to deliver things they can’t, but I think this is an example of how organisations that do deliver huge impact can afford to pay lower salaries, as the attraction of having a huge positive impact on the world should be sufficient to entice most altruistically motivated and talented people.

  • The solutions I’ve listed above probably aren’t great, but this is something we need to watch out for, and I hope some people are thinking about these problems and about optimal compensation at EA orgs! I’d definitely be keen to hear more ideas below.
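As a rough illustration of the progressive benchmark in the first bullet above, here is a minimal sketch. The bands and percentages are made up purely for illustration (the only anchor points taken from this post are 80% of market rate at the low end and 60% at a $200,000 market salary); none of this is a recommendation.

```python
# Illustrative only: a non-profit benchmark where the percentage of market
# rate falls as the market salary rises. Bands and rates are made up.
RATE_BANDS = [
    (100_000, 0.80),       # market salaries up to $100k: pay 80% of market
    (150_000, 0.70),       # up to $150k: 70% (made-up intermediate step)
    (200_000, 0.60),       # up to $200k: 60%, so a $200k market rate -> $120k
    (float("inf"), 0.50),  # above $200k: 50% (made up)
]

def nonprofit_salary(market_salary: float) -> float:
    """Return the benchmark non-profit salary for a given market salary."""
    for threshold, rate in RATE_BANDS:
        if market_salary <= threshold:
            return market_salary * rate

print(nonprofit_salary(200_000))  # 120000.0, matching the example above
```

A real scheme would probably use marginal bands (like income tax brackets) so the benchmark doesn’t jump down at the thresholds, but this captures the basic shape of “the percentage falls as the salary rises”.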

Credit

Thanks to Akhil Bansal, Leonie Falk, Sam Hilton and others for comments on this. All views and mistakes are my own of course.

  1. ^

    Value drift within EA has already been spoken about to some degree here, here and here.

  2. ^

    My diagrams are also simplified in other ways, e.g. they’re not to scale and the real distributions are almost certainly not the random ovals I drew.

  3. ^

    I’m sure there are exceptions to this rule, but I think these are the minority of cases.

  4. ^

    For example, reducing existential risk by just 0.00000000001% could theoretically be worth 10^20 lives saved, for which most EAs should rationally pay an extremely high price. One could therefore use these calculations to justify very large ($1-10+ million) salaries at basically any longtermist org for any role.
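    As a worked example of how these numbers could fit together (the ~10^33 figure is purely illustrative): 0.00000000001% is a fraction of 10^-13, and 10^-13 × 10^33 expected future lives = 10^20 lives saved in expectation.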

  5. ^

    H/T Leonie Falk

  6. ^

    Obviously this runs into the same problems about trade-offs between value alignment and attracting more talented people. It’s not obvious how one should balance these considerations, but this is an iteration of one possible solution.

  7. ^

    I’ve only just learned about self-determination theory and how it can be applied to workplaces, so I might be totally naive to any shortcomings.