FTX, ‘EA Principles’, and ‘The (Longtermist) EA Community’

1. Intro

Two weeks ago, I was in the process of writing a different essay. Instead, I’ll touch on FTX, and make the following claims.

  1. I think ‘the principles of EA’ are, at the community level, indeterminate in important ways. This makes me feel uncertain about the degree to which we can legitimately make statements of the form: “SBF violated EA principles”.

  2. The longtermist community — despite not having an explicit, widely agreed upon, and determinate set of deontic norms[1] — nevertheless contains a distinctive set of more implicit norms, which I believe are worth preserving at the community level. I thus suggest an alternative self-conception for the longtermist community, centered on striving towards a certain set of moral-cum-epistemic virtues.

Section 2 discusses the first claim, and Section 3 discusses the second. Each chunk can probably be read independently, though I’d like it if you read them both.

2. ‘EA Principles’

This section will criticize some of the comments in Will MacAskill’s tweet thread, published in the aftermath of FTX’s collapse.

I want to say that, while I’ll criticize some of Will’s remarks, I recognize that expressing yourself well under conditions of emotional stress is really, really hard. Despite this difficulty, I imagine that Will nevertheless felt he had to say something, and quickly. So, while I stand behind my criticism, I hope that my criticism can be viewed as an attempt to live up to ideals I think Will and I both share — of frank intellectual honesty, in service of a better world.

2.1.

From Will’s response:

“If those involved deceived others and engaged in fraud (whether illegal or not) that may cost many thousands of people their savings, they entirely abandoned the principles of the effective altruism community.” (emphasis mine)

Overall, I’m not convinced. In his tweet thread, Will cites various sources — one of which is Holden’s post on the dangers of maximization, in which Holden makes the following claim:

“I think “do the most good possible” is an … important idea … but it’s also a perilous idea if taken too far … Fortunately, I think EA mostly resists [the perils of maximization] – but that’s due to the good judgment and general anti-radicalism of the human beings involved, not because the ideas/​themes/​memes themselves offer enough guidance on how to avoid the pitfalls.”

According to Holden, one of EA’s “core ideas” is a concern with maximization. And he thinks that the primary way in which EA avoids the pitfalls of its core ideas is by being tempered by moderating forces external to those ideas themselves. If we weren’t tempered by such moderating forces, Holden claims that:

We’d have a community full of low-integrity people, and “bad people” as most people define it.

Here’s one (to me natural) reading of Holden’s post, in light of the FTX debacle. SBF was a risk-neutral Benthamite, who described his decision to found FTX as a risky, but positive expected value, bet made in service of the greater good. And, indeed, there are other examples of Sam being quite unusually committed to this risk-neutral, Benthamite way of approaching decisions. In light of this, one may think that Sam’s decision to deceive and commit fraud may well have been more in keeping with an attempt to live up to the core EA idea of explicit maximization, even if the attempt was poorly executed. On this reading, Sam’s fault may not have consisted in abandoning the principles of the EA community. Instead, his failings may have arisen from the absence of the normal moderating forces, which are external to EA ideas themselves.
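To make the ‘risk-neutral’ part concrete, here’s a minimal sketch in Python (with entirely made-up numbers, and no claim whatsoever about Sam’s actual reasoning) of how a risk-neutral expected value maximizer evaluates a gamble:

```python
# Toy illustration of risk-neutral expected value reasoning.
# All numbers are invented for illustration; this models no real decision.

def expected_value(outcomes):
    """Expected value of a gamble, given (probability, payoff) pairs."""
    return sum(p * payoff for p, payoff in outcomes)

# A 'safe' option: keep $1B for certain.
safe = [(1.0, 1_000_000_000)]

# A risky bet: 10% chance of $100B, 90% chance of total ruin.
risky = [(0.1, 100_000_000_000), (0.9, 0)]

print(expected_value(safe))   # 1e9
print(expected_value(risky))  # 1e10, ten times higher

# A risk-neutral maximizer takes the risky bet, because its expected
# value is higher, even though it ends in ruin 90% of the time.
# A risk-averse agent (say, one with log utility) would refuse.
```

The point is only that risk-neutrality, taken literally, recommends whichever option has the higher expected value, however likely it is to end in ruin.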

Recall Will’s statement: he claimed that, conditional on Sam committing fraud, Sam “entirely abandoned the principles of the effective altruism community”. I think this statement gets at something, because most individual EAs certainly did condemn Sam’s actions. But I think that the evidence garnered from our community’s reaction fails to constitute straightforward support for Will’s claim. After all, people may be condemning Sam from the perspective of felt sentiments external to EA principles, rather than from ideas that come directly from EA principles themselves.

2.2.

In What We Owe The Future, Will highlights the importance of deontological side-constraints, for both practical and instrumental reasons. In The Precipice, Ord highlights the importance of integrity. CEA extols the importance of honesty and trust. Given all this, it might seem natural to maintain that Sam violated EA principles by unduly focusing on one narrow feature of EA — the underrated benefits of explicit maximization — while neglecting other principles, which are also core to EA.

And, well, maybe. I can certainly imagine being convinced of this. But I think it’s a hard question, in part because EA contains far more explicit utilitarians than any other social group I’m aware of, alongside many people who endorse deontological theories which, given large enough stakes (as many longtermists suppose we face), start to look very utilitarian. And at least some of us believe, along with Holden, that it’s “extremely uncertain and debatable what utilitarianism says about a given decision, especially from a longtermist point of view”. Maybe Holden is wrong, and utilitarian-ish theories (including utilitarianism itself) actually do provide clear verdicts on the practical wrongness of violating side-constraints.

Still, I bring up Holden’s comments because I think my intuition says something like: “to claim that someone violated the principles of some community, two things must hold: (1) the action clearly violates said principles, and (2) those principles are near-unanimously agreed upon”.

2.3.

Here’s an example to motivate my intuition: Suppose you’re entering your garden, on the way back home from school, just in time for dinner with your upper-class family in Victorian England. As always, you see the motto inscribed beneath your family crest: “act with virtue”.

You walk inside. Dad’s shouting. He’s furious, after learning that you’ve been supporting the Suffragettes. “You’ve failed to live up to the family standards”, he tells you. You object, pointing out the fierceness of your late grandma, and what she would’ve wanted. Your dad says she doesn’t get a vote; in any case, all of your current family agree with him. He points out that various members of the family forum have really quite publicly said that women should be modest, and attempting to storm parliament certainly isn’t modest. You protest, again, that many in your lineage are advocates of equal rights — and equal treatment is a virtue, or so you claim.

You argue for hours, until you’re expelled from the house, and told to never return. As you leave, your father tells you that you’ve disgraced the family principles. Was he right? Who was actually living up to the family motto?

Here, I’m tempted to say that the motto is just too vague to license a determinate answer. The family members disagree with your actions, but you really do think that the Suffragettes are virtuous. And how is one meant to adjudicate this dispute? How does one act with virtue? I mean, it’s a vague term! Many people disagree on what it implies! To the extent that your family does unanimously disagree with your actions, I’d take that as a sign that your ‘Family Principles’ were previously too vague, not that you determinately violated them.

Back to EA: consequentialism is a majority view within EA, and its implications for respecting common sense moral norms are at least controversial, especially from a longtermist point of view. Sam was a known, committed consequentialist, who may have been attempting to make decisions in an explicitly consequentialist way. Thus, claims to the effect of ‘Sam’s actions violated EA principles’ feel too strong. Sam’s actions were obviously not required by EA principles, but nor am I confident that, at least before this post, we’d have had firm ground to say that fraud was condemned by EA principles.

2.4.

As you leave your house, you decide to read the family motto one last time. But, huh, you notice that it’s different now. It reads:

Effective altruism is:

  1. The use of evidence and careful reasoning to work out how to maximize the good with a given unit of resources, tentatively understanding ‘the good’ in impartial welfarist terms, and

  2. The use of the findings from (1) to try to improve the world

You remember that you’re not a Victorian child, after all. You wake up. You realize that you fell asleep while reading Will MacAskill’s ‘The Definition of Effective Altruism’.

I think reflecting on the definition of EA given by Will further highlights the difficulties involved in interpreting claims about fraud “abandoning the principles of EA”. Note that the definition is, as Will points out, non-normative. It doesn’t explicitly tell you what you ought to do. Moreover, EA contains a large community of people who endorse utilitarianism, or other unusual theories, which many people seem to think deliver unclear verdicts for action. Also, Sam really did seem, at least in part, to be trying to maximize his wealth in order to maximize his impact.

So, did Sam violate EA principles? In light of this discussion, I think the right answer, as with the Suffragette example, is: it’s indeterminate.

2.5.

I think it’s correct to say ‘it’s indeterminate’, rather than ‘no’, because our agreed upon standards for what counts as ‘careful reasoning’ in the domain of practical action clearly provide some precedent for denouncing things like fraud. We’ve already mentioned Will, CEA, and Toby’s statements to this effect. Joining the crowd, we have Stefan and Lucius, who have previously listed honesty (among other traits) as an important virtue for real-world utilitarians, and Richard Yetter-Chappell, a utilitarian and prominent member of EA, who has long maintained that utilitarians should be “honest, compassionate, loyal, trustworthy, averse to harming others … in other words, [we should] be virtuous rather than scheming”. (This list is not exhaustive, of course; these are just posts that I remember.)

So: many key figures in EA have long said (or implied) that performing explicit consequentialist calculations when considering a given action is sometimes incompatible with actually reasoning ‘carefully’, because such reasoning, for whatever reason, is unlikely to be a fruitful way of improving the world in the way you’d like.

Still, I think our principles are vague enough that we can’t straightforwardly point to fraud, in Sam’s case, and say that it violated EA principles. Even if Sam’s actions violated certain deontic side-constraints, it’s sometimes pointed out (correctly!) that various deontological theories start to look very consequentialist as the stakes get higher. Also, given Sam’s financial influence, he may well have believed that side-constraint violations are “almost always wrong”, but that, given his genuinely unusual position, he was in one of the few cases where side-constraint violations really would be for the greater (expected) good. Of course, I don’t actually know what Sam was thinking. But, given that I don’t know, I don’t feel confident saying that his decision procedure, whatever it may have been, was out of keeping with EA principles.

2.6.

In his post-FTX tweet thread, Will also called for EAs to ‘not see ourselves as above common sense moral norms’. Here, I read Will as attempting to take a stand on what the EA movement ought to stand for, and I share that interest. But I feel hesitant about championing the importance of ‘common sense morality’, for reasons alluded to by Tyler Cowen.

“Grandma, in her attachment to common sense morality, is not telling you to fly to Africa to save the starving children (though you should finish everything on your plate). Nor would she sign off on Singer (1972).”

In a month’s time, I’ll visit my family for Christmas, and (as per usual) I’ll insist on eating only vegan food. Now, I might be wrong to do this. But telling me that my insistence violates “common sense morality” is not enough (nor do I think it should be enough) to convince me. “Common sense morality” is a nebulous concept, parts of which are in obvious conflict with virtues I believe to be important. I think this is true for most of us. In many ways, we strive to be (in Will’s words) moral weirdos, and moral entrepreneurs.

Hence, I’m cautious about statements emphasizing the importance of common sense morality for two reasons. First, because of the vagueness just discussed. Second, because I think statements generically highlighting the importance of common sense morality are in tension with other features of the community’s self-conception — at least in the absence of further work outlining what, exactly, we’re deferring to when we defer to ‘common sense morality’.

Indeed, I can easily see how someone might look at public claims about the importance that EA ought to place on ‘common sense morality’ and come to believe — somewhat justifiably, even if incorrectly — that such statements amount to little more than PR. If we are to champion certain components of common sense morality, then I think we need to be explicit about what those features are, and how our commitment to those features fits neatly and coherently with the rest of our self-conception.

3. Virtues and ‘The Longtermist Community’

(Note: The following discussion is more specific to longtermist EA, rather than EA generally)

If the EA community’s principles are as indeterminate as I am claiming, one might start to feel sympathy with David Manheim’s recent post, which argues that we should move away from the idea of EA as a community. I reject this claim. Or, at least, I think that there ought to be a longtermist community. (I’m making this restricted claim because longtermism is the side of EA that has taken up most of my recent involvement.)

I think the longtermist community — viewed sociologically, as a community of people striving to embody certain moral and epistemic norms — has something going for it. And I think something would be lost if the community fragmented, leaving only individual groups working in specific cause areas. The longtermist community is united, I think, by its commitment to an unusual set of norms — norms that, despite being in many ways implicit, govern how we approach moral and practical reasoning. These norms help to form a shared set of (again, partially implicit) guidelines, and provide thinking tools used to inform people’s decisions to silo into different focus areas.

The longtermist community has value, I think, in virtue of the unusual norms which govern the way in which its members (aspire to) approach practical and moral reasoning. I think these norms are legitimately unusual, in a way that justifies having a dedicated longtermist community. However, I feel as though we lack an accurate and explicit self-conception, detailing the ways in which we depart from common sense. So I’ll suggest that we adopt a new self-conception — a conception partially anticipated by Tyler Cowen, again commenting on EA in the aftermath of FTX:

“I … anticipate a boring short-run trend, where most of the EA people scurry to signal their personal association with virtue ethics.”

And, well, there are boring ways of signaling your association with virtue ethics. So Tyler’s got me in one sense, because I do want to claim that the longtermist community should primarily conceive of itself as a movement centered on striving towards specific practical virtues.[2] However, I don’t want the longtermist movement to perfunctorily champion the importance of alternative ethical theories, while retaining an explicit conception of itself, at the movement level, in terms of a theory of the good. Instead, I want to point out the ways in which a virtue-based conception of the longtermist community — of the kind that includes both moral and epistemic virtues — provides a more faithful, and potentially less harmful, conception of what the longtermist community actually stands behind.

3.1.

When you heard of the fraud committed by FTX, were you angry?

If you were, then how much of this anger occurred after you had carefully calculated the expected costs and benefits of Sam’s actions, given his information at the time?

This isn’t a cheap jab at consequentialism. Consequentialists, of course, have stories about the illegitimacy of fraud; and, in any case, I’m not using the rhetorical questions above to motivate a discussion of whether, ultimately, at the level of moral theory, we want to justify our dislike of fraud on consequentialist grounds. I just want to note that, at a more proximate level, an explicit consequentialist calculation was likely not the direct cause of any outrage. Instead, the outrage flowed from our valuing certain virtues, whether explicated or not, and whether ultimately grounded in consequentialism or not.

I think many of us value virtues like integrity and honesty. And, while the appeal of those virtues is hardly unique to the longtermist community, I believe that we also see appeal in other, more unconventional virtues. One of our virtues concerns taking the potential scale of value seriously. We can call this the virtue of scope-sensitivity. We think that some outcomes can be a lot better than others, and strive to recognize the scale of value in our practical decisions. This virtue is one, among many, to which we aspire.

We also strive towards the virtue of impartiality. Or, if not complete impartiality, then we see virtue in the act of viewing oneself as a member of the broad community of sentient beings. We may still wish to be partial to our family, and loved ones, who form our closer community. But we hold in our minds a broader community still, encompassing every creature capable of having interests. We strive towards this virtue because we recognize that others — whoever, wherever, and whenever they are — have interests no less important than our own, or the interests of those more proximate.

Finally, as Richard Ngo points out, we strive to take responsibility for making the world better. We don’t just diagnose problems, but we consider it our duty to do something about them. And, in owning up to our responsibility to tackle hard problems, we see the virtue in being modest enough to own up to the ways in which we need to improve, and skill up, as well as the virtue in being immodest enough to believe that we actually can improve ourselves in relevant ways.

3.2.

Our sense of virtue contains epistemic components, too. We value looking at the many things we care about, and recognizing that we may face tradeoffs between our sacred values. And we strive, I hope, to face such tradeoffs not by downgrading the sacredness of our values, but by solemnly recognizing that the world constrains us in certain ways.

I think we aspire to the virtue of really believing what one says. We try not to treat beliefs as attire, but to recognize that our claims may commit us to other principles, or unusual courses of action, of which we were previously unaware. We take practical reasoning seriously, and consequently make use of more precise vocabulary to distinguish between our epistemic states. We talk of “immediate impressions”, and “all-things-considered judgments”, and aim to actually treat such distinctions as relevant when deciding between actions. Of course, we don’t always live up to this. But I think it’s fair to say that longtermists (and I think EAs generally) aspire to do this. We treat certain habits or dispositions as virtuous, and aim to move towards these ideals.

(I expect that some may read these paragraphs as objectionably self-congratulatory. But, honestly, I endorse them. EA is not the only community with notable virtues, but I think we do have notable virtues. Remember, none of us have to be here! We could read other stuff, get other jobs, and hang out with different people! I’ve chosen to be here because, through engaging with the community, I’ve felt inspired to improve along axes that I think make me a better person).

So, look, I don’t want us to “boringly signal our association with virtue ethics”. Instead, I want to point out that the real virtue ethics was in us all along.

Okay, well, I don’t want to say exactly that, but maybe something sort of close. I want us to recognize that, sociologically, we’re united by our recognition of certain virtues — virtues that we believe to be neglected and important. And I want to claim that, insofar as longtermist EA forms a community, that community is best viewed as one striving towards an unusual (and partially implicit) conception of moral and epistemic virtue. That feature of our community is, I believe, both unusual and worth preserving.

3.3.

There’s a deontic norm which I think has community precedent, and ought to be part of our explicit self-conception. I’ll first baptize the norm, before expanding on its content — it’s the norm of Practical Kantianism; or (if you prefer) impartially context-specified universalizability.

In order to clarify what I’m actually suggesting, we’ll refer to a comment from everyone’s favorite Kantian — Rob Wiblin. And, while Rob’s oeuvre contains many strident defenses of Kantianism,[3] I’ll limit myself to just a single quote. In one podcast discussion, Rob and Will land on the topic of how to view the expected value of contributing to some collective project (like, say, a protest), where the whole protest has positive expected value, even though it’s unclear whether your marginal contribution to the protest has positive expected value.

Rob: We think it’s actually worth thinking at a more group level, where you think: given the full cost of a project, given all of the people who might have to participate in it for it to reach a reasonable scale, and given the probability of that project as a whole, with all of those inputs succeeding, is it worth it in aggregate? And then if it is, then it’s probably worth it for each of the individual contributors to participate in it.

Will: Exactly.

Rob: And that’s a much more natural way of evaluating whether something is worthwhile than thinking about whether it’s worth you going in for one individual day more to work on the project. It’s too granular.

(Earlier in the podcast, Will echoes a similar thought, noting that “often the right way to think [about collective action] is through viewing yourself primarily as a ‘member of the community that you’re a part of that is taking action’”.)

In practice, I believe that many in the longtermist community would actually endorse something like Practical Kantianism. Instead of asking what marginal contribution they, as individuals, can make, longtermists are more likely to make a Kantian move — they’re more likely to treat maxims[4] as the proper object of normative evaluation, rather than the action of a lone individual.[5] That is, longtermists are more likely, I think, to treat the action of some larger community as foundational, and then assess the value of individual actions in virtue of their contribution to the net effect of that community’s actions.
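To make the contrast concrete, here’s a minimal sketch of the group-level evaluation procedure Rob describes. The numbers, and the simple success model, are my own illustrative assumptions, not anything drawn from Rob or Will:

```python
# Toy model of evaluating a collective project (e.g. a protest) at the
# group level. All parameters are invented purely for illustration.

N = 10_000                 # people who'd need to participate
TOTAL_COST = N * 10        # full cost of the project, across everyone
P_SUCCESS = 0.2            # chance the project succeeds with all N inputs
SUCCESS_VALUE = 5_000_000  # value of the project if it succeeds

# Group-level ('Practical Kantian') evaluation: is the project as a
# whole worth it, given its full cost and its probability of success
# with all of those inputs in place?
aggregate_ev = P_SUCCESS * SUCCESS_VALUE - TOTAL_COST

if aggregate_ev > 0:
    # If the aggregate bet is worth taking, treat it as worth taking for
    # each contributor, without estimating a separate pivotal probability
    # for any one person's marginal day of work.
    print(f"Join: each person's share is worth ~{aggregate_ev / N:.0f}")
else:
    print("Don't join: the project isn't worth it even in aggregate")
```

The marginal alternative would instead require estimating the probability that your one extra contribution tips the outcome, which is exactly the ‘too granular’ framing that Rob is pushing back on.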

Of course, if you’re doing Practical Kantianism, you have to carefully specify the context, and relevant community. No one endorses claims like: “I can’t go to the shops right now, because if everyone did that, all the surgeons would be off-duty, and people would die!”. You specify the context (including relevant contextual facts about your antecedent responsibilities and commitments), and then treat the maxim as your object of evaluation, rather than the lone act of an individual agent. It’s important to specify the context impartially, too. We don’t want to allow claims like “Violet Hour can lie for the greater good, because lying, from my perspective, has better expected consequences!”.

Now, admittedly, I’ve cited just a single example, in one off-the-cuff podcast comment. But Rob here is espousing a general principle, rather than an isolated reaction to a particular problem case. Also, Rob is the Research Director at 80,000 Hours, opining on questions of how to view the rationality of aggregate, community-level decisions. So, if Will champions the wrongness of rights violations, Eliezer consistently endorses the virtues of practical deontology, and Rob is a well-known Kantian, then I think it’s fair to say that my norm has precedent, and could be worth centering more explicitly in our self-conception.

3.4.

I recognize that my suggestion of an alternative community self-conception is somewhat vague, and raises various questions. Questions like:

“Which virtues ought we to adopt? How do we just decide upon a ~community self-conception~, and who could actually implement what you say? Also, what the hell is Practical Kantianism anyway? Oh, and can’t we just frame the Kantian stuff in terms of updateless decision theory?”

All good questions, all good questions, and I’m not sure my answers will be all that satisfying. To answer briefly: first, I don’t have a precise sense of our implicit virtues, though I’m thinking more about it. Second, implementation is hard, and would probably have to arise from highly visible EA organizations — like CEA, or 80,000 Hours, and others I’m probably unfairly leaving out — putting out public statements, and emphasizing such features in community building. (I’ll save the final two questions for another time.)

4. Fin

I am suggesting a change in the longtermist community’s self-conception, but I do not think that we need to do a PR-focused ‘rebrand’. Instead, I’m making this suggestion for the following reasons:

(1) I have an antecedent belief that the longtermist community lacks an accurate and explicit self-conception.

(2) I believe that the aftermath of FTX presents an opportunity for us to collectively explicate and reevaluate some of our core commitments.

(3) I hold out hope that an alternative community self-conception, more explicitly focused on the practical, on-the-ground norms we wish to encourage, would help reduce the chance of something like the FTX situation happening again.

So, to conclude, maybe longtermism is not centrally about maximization, or at least shouldn’t be. Instead, the longtermist community should aim to be a community of time-impartial Kantians, striving to embody a series of neglected virtues.

  1. ^

    That is, our norms about what constitutes right action.

  2. ^

    This is irrespective of the moral theory in which you may ultimately wish to ground such virtue talk — or, indeed, of whether you object to having a ‘theory’ of the good at all.

  3. ^

    This is a joke.

  4. ^

    A maxim is a claim, broadly speaking, of the form: ‘do action A in context C in order to achieve ends E’.

  5. ^

    I’m already going a bit rogue here, so I might as well speculate that endorsement of the Kantian move may reflect one (among many) deeper differences between longtermist and neartermist EAs. Neartermist EA is primarily focused (as I understand it) on the marginal contribution of one individual, whereas longtermists are more likely to be sympathetic to the claim I made in the main text, where the actions of some larger community are treated as the foundational object of evaluation.