The biggest risk of free-spending EA is not optics or motivated cognition, but grift

In “EA and the current funding situation,” Will MacAskill tried to enumerate the “risks of commission” that large amounts of EA funding exposed the community to (i.e., ways that extra funding could actually harm EA’s impact). “Free-spending EA might be a big problem for optics and epistemics” raised similar concerns.

The risks described in these posts largely involve either money looking bad to outsiders, or money causing well-intentioned people to think poorly despite their best efforts. I think this misses what I’d guess is the biggest risk: that large amounts of funding will attract people who aren’t making that effort at all, because they don’t share EA values and instead see the movement as a source of easy money and a target for grift.

Naively, you might think it’s not that much of a problem if (say) 50% of EA funding is eaten by grift—that’s only a factor-of-2 decrease in effectiveness, which isn’t much in a world of power-law-distributed impact. But in reality, grifters are incentivized to accumulate power and to sabotage the movement’s overall ability to process information, and many non-grifters find participating in high-grift environments unpleasant and leave. So the stable equilibrium (absent countermeasures) is closer to 100% grift.
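To make that equilibrium claim concrete, here’s a minimal toy simulation—my own illustration, with entirely made-up parameters, not anything from the posts above. It assumes only two dynamics: grifters’ influence compounds faster than aligned members’ (because they optimize directly for it), and aligned members exit at a rate that grows with how grift-heavy the environment already is. Under those assumptions the grifter share doesn’t stay at any fixed fraction like 50%; it drifts toward 1.

```python
# Toy model of the feedback loop described above. All parameters are
# made up and purely illustrative; the point is the direction of the
# dynamics, not the numbers.

def grift_share_over_time(initial_share=0.10, steps=40,
                          grifter_growth=0.30, aligned_growth=0.15,
                          exit_rate=0.20):
    """Return the grifters' share of influence at each step.

    g and a are the (abstract) amounts of influence held by grifters
    and aligned members respectively.
    """
    g, a = initial_share, 1.0 - initial_share
    history = [g / (g + a)]
    for _ in range(steps):
        # Grifters optimize directly for money/power, so their influence
        # compounds faster than that of aligned members.
        g *= 1.0 + grifter_growth
        # Aligned members grow more slowly, and some leave as the
        # environment becomes more grift-heavy and unpleasant.
        share = g / (g + a)
        a *= (1.0 + aligned_growth) * (1.0 - exit_rate * share)
        history.append(g / (g + a))
    return history


if __name__ == "__main__":
    shares = grift_share_over_time()
    for step in range(0, len(shares), 10):
        print(f"step {step:2d}: grifter share of influence ~ {shares[step]:.2f}")
```

The specific numbers are meaningless; the point is just that once you include the compounding advantage and the exodus of non-grifters, the “grift is a fixed 50% tax” picture stops holding.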

The basic mental model

This is something I’ve thought about, and talked to people about, a fair amount, because an analogous grift problem exists in successful organizations and I’d like to help the one I work at avoid that fate. In addition to those conversations, a lot of what I go over here is based on the book Moral Mazes, and I’d recommend reading it (or Zvi Mowshowitz’s review and elaboration, which IMO is hyperbolic but directionally correct) for more detail.

At some point in their growth, most large organizations become extremely ineffective at achieving their goals. If you look for the root cause of individual instances of inefficiency and sclerosis in these orgs, it’s very frequently that some manager, or group of managers, was “misaligned” with the overall organization: they were trying to do what was best for themselves rather than for the org as a whole, and often actively sabotaging the org to improve their own prospects.

The stable equilibrium for these orgs is to be composed almost entirely of misaligned managers, because:

  • Well-aligned managers prioritize the org’s values over their own ascent up the hierarchy (by definition), so they will be outcompeted for advancement by misaligned managers who prioritize their own ascent above all else.

  • Misaligned managers will attempt to sabotage and oust well-aligned managers, because well-aligned managers’ values make them harder to predict and therefore, from a misaligned manager’s perspective, more likely to do surprising or dangerous things.

  • Most managers get most of their information from their direct reports, who can sabotage information flows when accurate reporting would make them look bad. So even if a well-aligned manager has the power to oust a misaligned direct report, they may never realize there’s a problem.

For example, a friend described a group inside a name-brand company he worked at that was considered by almost every individual contributor to be extremely incompetent and impossible to collaborate with, largely as a result of poor leadership by its manager. The problem was so bad that when the manager came up for promotion, a number of senior people from outside the group signed a memo to the decision-maker saying that approving the promotion would be a disaster for the company. The promotion was denied that cycle, but approved in the next one. Even with the warning sign of strong opposition from people elsewhere in the company, the decision-maker was fed enough bad information by the manager and their allies that he made the wrong call.

Smaller organizations can escape this for a while because information flow is simpler and harder for misaligned managers to sabotage, and because the organization doesn’t have enough resources (money or people) to be a juicy target. But as they get larger and better-resourced, they tend to fall into the trap eventually.

The EA movement isn’t exactly like a corporation, but I think analogous reasoning applies. Grifters are optimizing only to get themselves money and power; EAs are optimizing for improving the world. So, absent countermeasures, grifters will be better at getting money and power. Grifters prefer working with other grifters who are less likely to expose their grift. And grifters will be incentivized to control and sabotage the flow of information in EA, which will make it increasingly hard to be a non-grifter.

Evidence

The EA community is already showing some early signs of an increase in misalignment:

  • I’ve heard several people mention hearing third parties say things like “all you have to do is say a few of the right words and you get [insert free stuff here].”

  • I recently spoke to an EA-ish person who received substantial funding from one or more very large EA donors. They themselves acknowledged that their case for impact, judged by the donors’ stated values and cause prioritization, was tenuous at best. I think their work will still have an extremely positive impact on the world if it succeeds, and could be considered EA by other values, so it’s not as if the money was wasted; but it does suggest that the large donors were fairly exploitable.

I have vague recollections of hearing a lot more examples like this, but I haven’t been following EA community gossip closely enough to reconstruct them well enough to include here. I’d encourage people to add their own data points in the comments.

So far, I can recall the EA community expelling one grifter (Intentional Insights). I agree with shlevy’s comment on that post:

While I understand the motivation behind it, and applaud this sort of approach in general, I think this post and much of the public discussion I’ve seen around Gleb are charitable and systematic in excess of reasonable caution.

There’s a huge offense-defense asymmetry right now: it’s relatively easy for grifters to exploit EA, but it takes enormous amounts of time for a grift to be conclusively discovered and refuted. If this continues, it’s going to be hard for EA to protect itself from an influx of people looking for easy money and power.

Conclusion

I think more funding is still probably great on net; I’m just worried that we’re not paying enough attention to the grift problem, or acting fast enough on it.

I wanted to add some suggested ways to mitigate it, but I’m out of time right now and anyway I’m a lot less confident in my solutions to this than in the fact that it’s a problem. So maybe discuss potential mitigations in the comments :)