Thoughts on EA, post-FTX

Introduction

This is a draft, largely written back in Feb 2023, about how the future of EA should look after the implosion of FTX.

I. What scandals have happened and why?

1.

There are several reasons that EA might be expected to do harm and have scandals:

a) Bad actors. Some EAs will take harmful actions with a callous disregard for others. (Some EAs have psychopathic tendencies, and it is worth noting that utilitarian intuitions correlate with psychopathy.)

b) Naivete. Many EAs are not socially competent or streetsmart enough to detect the bad actors. Rates of autism are high in EA, but there are also more general hypotheses. Social movements may in general be susceptible to misbehaviour, due to young members having an elevated sense of importance, or being generally impressionable. See David Chapman on “geeks, mops and sociopaths” for other hypotheses.

c) Ideological aspects. Some ideas held by many EAs—whether right or wrong, and implied by EA philosophy or not—encourage risky behaviour. We could call these ideas risky beneficentrism (RB), and they include:

i. High risk appetite.

ii. Scope sensitivity.

iii. Unilateralism.

iv. Permission to violate societal norms. Violating or reshaping an inherited morality or other “received wisdom” for the greater good.

v. Other naive consequentialism. Disregard of further second-order effects.

There are also hypotheses that mix or augment these categories: EAs might be more vulnerable to psychopathic behaviour in general, because that kind of decision-making appears superficially similar to consequentialist decision-making.

2.

All of (a-c) featured in the FTX saga. SBF was psychopathic, and his behaviour included all five of these dimensions of risky beneficentrism. The FTX founders weren’t correctly following the values of the EA community, but much of what would have been warning signs to others (gambling-adjacency, the Bahamas, lax governance) just looked to us like someone pursuing a risky, scope-sensitive, convention-breaking, altruistic endeavour. And we EAs outside FTX, perhaps due to ambition and naivete, supported these activities.

3.

Other EA scandals, similarly, often involve several of these elements:

  • [Person #1]: past sexual harassment issues, later reputation management including Wiki warring and misleading histories. (norm-violation, naive conseq.)

  • [Person #2]: sexual harassment (norm-violation? naive conseq.?)

  • [Person #3], [Person #4], [Person #5]: three more instances of crypto crimes (scope sensitivity? norm-violation? naive conseq.? naivete?)

  • Intentional Insights: aggressive PR campaigns (norm-violation, naive conseq., naivete?)

  • Leverage Research, including partial takeover of CEA (risk appetite, norm-violation, naive conseq., unilateralism, naivete)

  • (We’ve seen major examples of sexual misbehaviour and crypto crimes in the rationalist community too.)

EA documents have tried to discourage RB, but doing so now seems harder than we thought. Maybe promoting EA inevitably leads to significant amounts of harmful RB.

4.

People have a variety of reasons to be less excited about growing EA:

EA contributed to a vast financial fraud, through its:

  • People. SBF was the best-known EA, and among the earliest 1% of EAs. FTX’s leadership was mostly EAs. FTXFF was overwhelmingly run by EAs, including EA’s main leader and another intellectual leader of EA.

  • Resources. FTX had some EA staff and was funded by EA investors.

  • PR. SBF’s EA-oriented philosophy of giving, and his purported frugality, served as cover for his unethical nature.

  • Ideology. SBF apparently had an RB ideology, as a risk-neutral act-utilitarian who, a decade ago on Felicifia, argued that stealing was not in principle wrong. In my view, his ideology, at least as he professed it, could best be understood as an extremist variant of EA.

Now, alongside the positive consequences of EA, such as billions of dollars donated, and minds changed about AI and other important topics, we must account for about $10B having been stolen.

People who work in policy now stand to have their reputations harmed by a connection to “EA”. This means that EA movement growth can make policymakers more likely to be tied to EA, harming their career prospects.

From a more personal perspective, 200-800 EAs lost big from FTX: losing their jobs or savings, or suffering sheer embarrassment (having recommended grants that were later rescinded, etc.).

Promoting EA may also be more difficult, as there are new ways that it can go badly. For example, the conversation could go like this:

Booster: “Have you heard about EA?”
Recruit: “That thing that donated $1B, and then FTX defrauded $10B? The fifth-largest fraud of all time?”
Booster: “...”

II. More background considerations about EA’s current trajectory.

5.

Despite the above, EA is not about to disappear entirely. For any reasonable strategy that EA leaders pursue, some of the most valuable aspects of EA will persist:

  • EA values. Some people care about EA very deeply. Religions can withstand persecution by totalitarian governments, and some feel just about as strongly about EA. So people will continue to hold EA values.

  • EA orgs. There are >30 orgs doing different EA things, with different leaders, inside and outside academia, in many nations, and supported by multiple funders. Most will survive.

  • The EA network. Many have been friends and colleagues for a decade or more. This adversity won’t tear the social graph apart.

6.

There has been a wide range of responses to FTX-related events. One common response is that people are less excited about EA.

  • Good people have become disillusioned or burned out, and are leaving EA. Note also the discontinuation of Lightcone, for partly related reasons.

  • Personally, I still value EA goals and EA work, and I still have EA friends, but my enthusiasm for EA as a movement is gone.

  • Some argue that the people I hear from are not representative. But I think that, if anything, they differ by being older, more independent-minded, and more academically successful. And if outreach is still thriving only because new recruits are young and naive about EA’s failings, then this is not reassuring. Besides, the bigger selection effect is that we are not hearing from many people at all, because they have left EA.

7.

Another kind of response is to get defensive. Some of us have taken a bunker-type mindset, trying to fortify EA against criticism. One minor example: when I wrote in my bio, for an EA organiser retreat, that I was interested in comparing community-building vs non-CB roles, and in discussing activities other than movement-building, a worried organiser reminded me that the purpose of the retreat was to promote community-building. Another example: many people told me that their biggest concern about criticism of EA is that it demotivates people. Others reflected more deeply on EA, and I think that was the more appropriate response.

8.

Some people changed their views on EA less than I would like, especially some grantmakers and leaders. (I hesitate to point it out, but such folks do derive a lot of influence from their roles within EA, making it harder to update.)

  • Some have suggested that there was little wrong with the style of politics pursued using FTX money, apart from the fraud.

  • Some asked: if students seem to still be interested, then why is there a problem? One asked “aren’t you a utilitarian? Should you therefore place low weight on these things?”. In contrast, I think that our seat-of-the-pants impression of the severity of events is more reliable than e.g. surveys of selected samples of event attendees, and that we should treat recent scandals seriously.

9.

I think that CEA has struggled in some ways to adapt to the new situation.

  • Vacuous annual update. CEA’s annual update said that CEA had a great year, because metrics were up, while EA at large had its worst year ever. In order to know the value of different kinds of growth, including even the sign, one must know the strategic landscape, which was not discussed. EVF and its board have generally said little about the FTX issue.

  • Governance. EVF failed to announce its new CEO for months, and for almost a year, the FTXFF team still made up 40% of its UK board. At the time of writing, it is thought 10% likely to be subject to discipline from the Charity Commission.

III. Thoughts on strategy

So far, I have largely just described EA’s strategic position. I will now outline what I think we should now do.

10.

Promoting “EA” as a way to generate interest in x-risk now makes less sense. But I don’t think pivoting the longtermist EA scene toward being an “x-risk” community solves the problem; it might even make it worse.

a) Typically, people I talk to are interested in EA movement-building as a route to reducing x-risks. It is a priori surprising that we should need to detour through moral philosophy and global health, but they raise three points in favour:

i. EA is an especially persuasive way to get people working on x-risk.

ii. (AI) x-risk is too weird to directly persuade people about.

iii. People who work on existential risk for “the right reasons” (i.e. reasons related to EA) can be trusted more to do a good job.

Due to the reputational collapse of EA, (i) is less plausible. And (ii) is much less true than it once was, thanks to Superintelligence, The Precipice, and growing AI capabilities. It might make some sense to want someone who can pursue your goals, as in (iii). But on the other hand, interest in EA is now a less reliable signal of someone’s epistemics, and of their trustworthiness in general, pushing in the other direction.

b) There is also somewhat less goodwill and trust between longtermist and non-longtermist EA, than there was a decade ago.

c) Even before the events of FTX, we knew that there was low-hanging fruit in field-building that we should pick—now we just need to give it more of our attention.

d) So, within a few years, I think most people doing EA work will be inclined to promote causes of interest (x-risk, global poverty, etc.) in a direct fashion.

e) This is not without its challenges. We will need to work out new cause-specific talent pipelines. And many of the difficulties with EA outreach will remain in the cause-specific communities.

f) A central problem is that an AI safety community founded by EAs could—absent mitigating actions—be expected to have many of the same problems that afflicted EA. We could still expect that a subculture might form. If AI risk can literally cause human extinction, then it is not hard to justify the same kind of risky actions that RB can justify. Some illegal actions might even be more likely in an AI safety community, such as trying to harm AI researchers. So fundamentally, a switch to directly promoting AI/x-risk reduction doesn’t solve the problems discussed in this document.

g) But starting a newly branded “AI safety” community might present an opportunity to reset our culture and practices.

11.

Apart from pivoting to “x-risk”, what else could we do? Would it help if…

a) …EA changed its name?

This would be deceptive, and it wouldn’t solve the risky behaviour, or even the reputational issues, in the long term.

b) …we had a stronger community health team with a broad mandate for managing risks, rather than one focused mostly on social disputes and PR?

Maybe, but CH already had a broad mandate on paper. Given EVF’s current situation, it might be a tall task. And if VCs and accountancy firms didn’t see FTX’s problems, then a beefed-up CH team might not either.

Maybe a CH team could do this better independently of CEA.

Alternatively, risk management could be decentralised by instantiating a stronger norm against risky projects: If Alice thinks some project is good but Bob says it’s harmful, we trust Bob more than we did before.

c) … we shaped the ideology to steer clearer of RB & naive consequentialism?

Yes, this was already attempted, but we could do more. OTOH, it could be that beneficentrists naturally (and intractably) include some psychopaths and other untrustworthy figures. If so, this could undermine the central strategy of recruiting beneficentrists into a community. But to the extent that people are committed to growing this beneficentrist community, such mitigation strategies are worth trying.

d) … we selected against people with dark triad traits, and, in cases where we did recruit such people, identified and distanced ourselves from them?

Absolutely.

e) … EA was more professionalised, and less community-like?

Yes. People do risky things when they feel it’s them vs the world. And we trusted Sam too much because he was “one of us”.

f) … we had better governance?

Yes, there was little oversight of FTX, which enabled various problems.

g) … EAs otherwise became less bound to one another’s reputations?

It doesn’t obviate the harm from risky behaviour, but ideally yes, and this could also be achieved by being less community-like.

We could focus less on visible displays of do-gooding, to reduce the false impression that we are supposed to be a community of moral saints.

These are just my initial thoughts; obviously, on all fronts, further investigation is needed.

12.

What should happen overall?

a) I think most EAs are not yet ready to let go of the existence of an EA movement. Nor is it clear that we could wind down effectively if we tried (without it getting co-opted). And things might look different in a year. So I don’t think it makes sense to try to wind down EA completely.

b) Still, it’s hard to see how tweaking EA can lead to a product that we should be excited about growing. Especially considering that we have the excellent option of just talking directly about the issues that matter to us, and doing field-building around those ideas: AI safety, Global Priorities Research, and so on. This would give us a relatively clean slate, allowing us to do more (as outlined in 11) to discourage RB and stop bad actors.

c) In this picture, EA would grow more slowly or shrink for a while, and maybe ultimately be overtaken by cause-specific communities.

Thanks to a dozen or so readers for their thoughtful comments and suggestions.