FTX, ‘EA Principles’, and ‘The (Longtermist) EA Community’
1. Intro
Two weeks ago, I was in the process of writing a different essay. Instead, I’ll touch on FTX, and make the following claims:
I think ‘the principles of EA’ are, at the community level, indeterminate in important ways. This makes me feel uncertain about the degree to which we can legitimately make statements of the form: “SBF violated EA principles”.
The longtermist community — despite lacking an explicit, widely agreed upon, and determinate set of deontic norms[1] — nevertheless contains a distinctive set of more implicit norms, which I believe are worth preserving at the community level. I thus suggest an alternative self-conception for the longtermist community, centered on striving towards a certain set of moral-cum-epistemic virtues.
Section 2 discusses the first claim, and Section 3 discusses the second. Each chunk can probably be read independently, though I’d like it if you read them both.
2. ‘EA Principles’
This section will criticize some of the comments in Will’s tweet thread, published in the aftermath of FTX’s collapse.
I want to say that, while I’ll criticize some of Will’s remarks, I recognize that expressing yourself well under conditions of emotional stress is really, really hard. Despite this difficulty, I imagine that Will nevertheless felt he had to say something, and quickly. So, while I stand behind my criticism, I hope that my criticism can be viewed as an attempt to live up to ideals I think Will and I both share — of frank intellectual honesty, in service of a better world.
2.1.
From Will’s response:
“If those involved deceived others and engaged in fraud (whether illegal or not) that may cost many thousands of people their savings, they entirely abandoned the principles of the effective altruism community.” (emphasis mine)
Overall, I’m not convinced. In his tweet thread, Will cites various sources — one of which is Holden’s post on the dangers of maximization, in which Holden makes the following claim:
“I think “do the most good possible” is an … important idea … but it’s also a perilous idea if taken too far … Fortunately, I think EA mostly resists [the perils of maximization] – but that’s due to the good judgment and general anti-radicalism of the human beings involved, not because the ideas/themes/memes themselves offer enough guidance on how to avoid the pitfalls.”
According to Holden, one of EA’s “core ideas” is a concern with maximization. And he thinks that the primary way in which EA avoids the pitfalls of its core ideas is by being tempered by moderating forces external to the core ideas themselves. If we weren’t tempered by moderating forces, Holden claims that:
We’d have a community full of low-integrity people, and “bad people” as most people define it.
Here’s one (to me natural) reading of Holden’s post, in light of the FTX debacle. SBF was a risk-neutral Benthamite, who described his decision to found FTX as a risky but positive expected value bet, made in service of the greater good. And, indeed, there are other examples of Sam being really quite unusually committed to this risk-neutral, Benthamite way of approaching decisions. In light of this, one may think that Sam’s decision to deceive and commit fraud may well have been more in keeping with an attempt to meet the core EA idea of explicit maximization, even if his attempt was poorly executed. On this reading, Sam’s fault may not have consisted in abandoning the principles of the EA community. Instead, his failings may have arisen from the absence of normal moderating forces, which are external to EA ideas themselves.
Recall Will’s statement: he claimed that, conditional on Sam committing fraud, Sam “entirely abandoned the principles of the effective altruism community”. I think this statement gets at something, because most individual EAs certainly did condemn Sam’s actions. But I think that the evidence garnered from our community’s reaction fails to constitute straightforward support for Will’s claims. After all, people may be condemning Sam from the perspective of felt sentiments external to EA principles, rather than ideas that come directly from EA principles themselves.
2.2.
In What We Owe The Future, Will highlights the importance of deontological side-constraints, for both practical and instrumental reasons. In The Precipice, Ord highlights the importance of integrity. CEA extols the importance of honesty and trust. Given all this, it might seem natural to maintain that Sam violated EA principles by unduly focusing on one narrow feature of EA — the underrated benefits of explicit maximization — while neglecting other principles, which are also core to EA.
And, well, maybe. I can certainly imagine being convinced of this. But I think it’s a hard question, in part because EA contains far more explicit utilitarians than any other social group I’m aware of, alongside many people who endorse deontological theories which, given large enough stakes (as many longtermists suppose we face), start to look very utilitarian. And at least some of us believe, along with Holden, that it’s “extremely uncertain and debatable what utilitarianism says about a given decision, especially from a longtermist point of view”. Maybe Holden is wrong, and utilitarian-ish theories (including utilitarianism itself) actually do provide clear verdicts on the practical wrongness of violating side-constraints.
Still, I bring up Holden’s comments because I think my intuition says something like: “for someone to count as violating the principles of some community, their action must (1) clearly violate said principles, and (2) those principles must be near-unanimously agreed upon”.
2.3.
Here’s an example to motivate my intuition. Suppose you’re entering your garden, on the way back home from school, just in time for dinner with your upper class family in Victorian England. As always, you see the motto inscribed beneath your family crest: “act with virtue”.
You walk inside. Dad’s shouting. He’s furious, after learning that you’ve been supporting the Suffragettes. “You’ve failed to live up to the family standards”, he tells you. You object, pointing out the fierceness of your late grandma, and what she would’ve wanted. Your Dad says she doesn’t get a vote; in any case, all your current family agree with him. Various members of the family forum, he notes, have really quite publicly said that women should be modest, and attempting to storm parliament certainly isn’t modest. You protest, again, that many in your lineage are advocates of equal rights — and equal treatment is a virtue, or so you claim.
You argue for hours, until you’re expelled from the house, and told to never return. As you leave, your father tells you that you’ve disgraced the family principles. Was he right? Who was actually living up to the family motto?
Here, I’m tempted to say that the motto is just too vague to license a determinate answer. The family members disagree with your actions, but you really do think that the Suffragettes are virtuous. And how is one meant to adjudicate this dispute? How does one act with virtue? I mean, it’s a vague term! Many people disagree on what it implies! To the extent that your family does unanimously disagree with your actions, I’d take that as a sign that your ‘Family Principles’ were previously too vague, not that you determinately violated them.
Back to EA: consequentialism is a majority view within EA, and its implications for respecting common sense moral norms appear at least controversial, especially from a longtermist point of view. Sam was a known, committed consequentialist, who may have been attempting to make decisions in an explicitly consequentialist way. Thus, claims to the effect of ‘Sam’s actions violated EA principles’ feel too strong. Sam’s actions were obviously not required by EA principles, but nor am I confident that, at least before this post, we’d have had firm ground to say that fraud was condemned by EA principles.
2.4.
As you leave your house, you decide to read the family motto one last time. But, huh, you notice that it’s different now. It reads:
Effective altruism is:
(1) The use of evidence and careful reasoning to work out how to maximize the good with a given unit of resources, tentatively understanding ‘the good’ in impartial welfarist terms, and
(2) The use of the findings from (1) to try to improve the world
You remember that you’re not a Victorian child, after all. You wake up. You realize that you fell asleep while reading Will MacAskill’s ‘The Definition of Effective Altruism’.
I think reflecting on the definition of EA given by Will further highlights the difficulties involved in interpreting claims about fraud “abandoning the principles of EA”. Note that the definition is, as Will points out, non-normative: it doesn’t explicitly tell you what you ought to do. Moreover, EA contains a large community of people who endorse utilitarianism, or other unusual theories, which many people seem to think deliver unclear verdicts for action. And Sam really did seem, at least in part, to be trying to maximize his wealth in order to maximize his impact.
So, did Sam violate EA principles? In light of this discussion, I think the right answer, as with the Suffragette example, is: it’s indeterminate.
2.5.
I think it’s correct to say ‘it’s indeterminate’, rather than ‘no’, because our agreed upon standards for what counts as ‘careful reasoning’ in the domain of practical action clearly provide some precedent for denouncing things like fraud. We’ve already mentioned Will, CEA, and Toby’s statements to this effect. Joining the crowd, we have Stefan and Lucius, who have previously listed honesty (among other virtues) as important for real-world utilitarians, and Richard Yetter-Chappell, a utilitarian and prominent member of EA, who has long maintained that utilitarians should be “honest, compassionate, loyal, trustworthy, averse to harming others …. in other words, [we should] be virtuous rather than scheming”. (This list is not exhaustive, of course; these are just the posts I remember.)
So: many key figures in EA have long said (or implied) that performing explicit consequentialist calculations when considering a given action is sometimes incompatible with actually reasoning ‘carefully’, because such reasoning, for whatever reason, is unlikely to be a fruitful way of improving the world in the way you’d like.
Still, I think our principles are vague enough that we can’t straightforwardly point to fraud, in Sam’s case, and say that it violated EA principles. Even if Sam’s actions violated certain deontic side-constraints, it’s sometimes pointed out (correctly!) that various deontological theories start to look very consequentialist as the stakes get higher. Also, given Sam’s financial influence, he may well have believed that side-constraint violations are “almost always wrong” but, given his genuinely unusual position, that he found himself in one of the few cases where side-constraint violations really would be for the greater (expected) good. Of course, I don’t actually know what Sam was thinking. But, given that I don’t know, I don’t feel confident saying that his decision procedure, whatever it may have been, was out of keeping with EA principles.
2.6.
In his post-FTX tweet thread, Will also called for EAs to ‘not see ourselves as above common sense moral norms’. Here, I read Will as attempting to take a stand on what the EA movement ought to stand for, and I share that interest. But I feel hesitant about championing the importance of ‘common sense morality’, for reasons alluded to by Tyler Cowen.
“Grandma, in her attachment to common sense morality, is not telling you to fly to Africa to save the starving children (though you should finish everything on your plate). Nor would she sign off on Singer (1972).”
In a month’s time, I’ll visit my family for Christmas, and (as per usual) I’ll insist on eating only vegan food. Now, I might be wrong to do this. But telling me that my insistence violates “common sense morality” is not enough (nor do I think it should be enough) to convince me. “Common sense morality” is a nebulous concept, some parts of which are in obvious conflict with virtues I believe to be important. I think this is true for most of us. In many ways, we strive to be (in Will’s words) moral weirdos, and moral entrepreneurs.
Hence, I’m cautious about statements emphasizing the importance of common sense morality for two reasons. Firstly, because of the vagueness. Secondly, because I don’t think statements generically highlighting the importance of common sense morality are in genuine tension with other features of the community’s self-conception — at least in the absence of further work outlining what, exactly, we’re deferring to when we defer to ‘common sense morality’. Absent that work, such statements do little to constrain us.
Indeed, I can easily see how someone might look at public claims about the importance that EA ought to place on ‘common sense morality’ and come to believe — somewhat justifiably, even if incorrectly — that such statements amount to little more than PR. If we are to champion certain components of common sense morality, then I think we need to be explicit about what those features are, and how our commitment to those features fits neatly and coherently with the rest of our self-conception.
3. Virtues and ‘The Longtermist Community’
(Note: The following discussion is more specific to longtermist EA, rather than EA generally)
If the EA community’s principles are as indeterminate as I am claiming, one might start to feel sympathy with David Manheim’s recent post. David argues that we should move away from the idea of EA as a community. I reject this claim. Or, at least, I think that there ought to be a longtermist community. (I’m making this restricted claim because longtermism is the side of EA that has taken up most of my recent involvement.)
I think the longtermist community — viewed sociologically, as a community of people striving to embody certain moral and epistemic norms — has something going for it. And I think something would be lost if the community fragmented, leaving only individual groups working in specific cause areas. The longtermist community is united, I think, by its commitment to an unusual set of norms — norms that, despite being in many ways implicit, govern how we approach moral and practical reasoning. These norms help to form a shared set of (again, partially implicit) guidelines, and provide thinking tools used to inform people’s decisions to silo into different focus areas.
The longtermist community has value, I think, in virtue of the unusual norms which govern the way in which its members (aspire to) approach practical and moral reasoning. I think these norms are legitimately unusual, in a way that justifies having a dedicated longtermist community. However, I feel as though we lack an accurate and explicit self-conception, detailing the ways in which we depart from common sense. So I’ll suggest that we adopt a new self-conception — a conception partially anticipated by Tyler Cowen, again commenting on EA in the aftermath of FTX:
“I … anticipate a boring short-run trend, where most of the EA people scurry to signal their personal association with virtue ethics.”
And, well, there are boring ways of signaling your association with virtue ethics. So Tyler’s got me in one sense, because I do want to claim that the longtermist community should primarily conceive of itself as a movement centered on striving towards specific practical virtues.[2] However, I don’t want the longtermist movement to perfunctorily champion the importance of alternative ethical theories, while retaining an explicit conception of itself, at the movement level, in terms of a theory of the good. Instead, I want to point out the ways in which a virtue-based conception of the longtermist community — of the kind that includes both moral and epistemic virtues — provides a more faithful, and potentially less harmful conception of what the longtermist community actually stands behind.
3.1.
When you heard of the fraud committed by FTX, were you angry?
If you were, then how much of this anger occurred after you had carefully calculated the expected costs and benefits of Sam’s actions, given his information at the time?
This isn’t a cheap jab at consequentialism. Consequentialists, of course, have stories about the illegitimacy of fraud; and, in any case, I’m not using the rhetorical questions above to motivate a discussion of whether, ultimately, at the level of moral theory, we want to justify our dislike of fraud on consequentialist grounds. I just want to note that, at a more proximate level, an explicit consequentialist calculation was likely not the direct cause of any outrage. Instead, the outrage flowed from certain virtues we value, whether explicated or not, and whether ultimately grounded in consequentialism or not.
I think many of us value virtues like integrity and honesty. And, while the appeal of those virtues is hardly unique to the longtermist community, I believe that we also see appeal in other, more unconventional virtues. One of our virtues concerns taking the potential scale of value seriously. We can call this the virtue of scope-sensitivity. We think that some outcomes can be a lot better than others, and strive to recognize the scale of value in our practical decisions. This virtue is one, among many, to which we aspire.
We also strive towards the virtue of impartiality. Or, if not complete impartiality, then we see virtue in the act of viewing oneself as a member of the broad community of sentient beings. We may still wish to be partial to our family, and loved ones, who form our closer community. But we hold in our minds a broader community still, encompassing every creature capable of having interests. We strive towards this virtue because we recognize that others — whoever, wherever, and whenever they are — have interests no less important than our own, or the interests of those more proximate.
Finally, as Richard Ngo points out, we strive to take responsibility for making the world better. We don’t just diagnose problems, but we consider it our duty to do something about them. And, in owning up to our responsibility to tackle hard problems, we see the virtue in being modest enough to own up to the ways in which we need to improve, and skill up, as well as the virtue in being immodest enough to believe that we actually can improve ourselves in relevant ways.
3.2.
Our sense of virtue contains epistemic components, too. We value looking at the many things we care about, and recognizing that we may face tradeoffs between our sacred values. And we strive, I hope, to face such tradeoffs not by downgrading the sacredness of our values, but by solemnly recognizing that the world constrains us in certain ways.
I think we aspire to the virtue of really believing what one says. We try not to treat beliefs as attire, but to recognize that our claims may commit us to other principles, or unusual courses of action, of which we were previously unaware. We take practical reasoning seriously, and consequently make use of more precise vocabulary to distinguish between our epistemic states. We talk of “immediate impressions”, and “all-things-considered judgments”, and aim to actually treat such distinctions as relevant when deciding between actions. Of course, we don’t always live up to this. But I think it’s fair to say that longtermists (and I think EAs generally) aspire to do this. We treat certain habits or dispositions as virtuous, and aim to move towards these ideals.
(I expect that some may read these paragraphs as objectionably self-congratulatory. But, honestly, I endorse them. EA is not the only community with notable virtues, but I think we do have notable virtues. Remember, none of us have to be here! We could read other stuff, get other jobs, and hang out with different people! I’ve chosen to be here because, through engaging with the community, I’ve felt inspired to improve along axes that I think make me a better person).
So, look, I don’t want us to “boringly signal our association with virtue ethics”. Instead, I want to point out that the real virtue ethics was in us all along.
Okay, well, I don’t want to say exactly that, but maybe something sort of close. I want us to recognize that, sociologically, we’re united by our recognition of certain virtues — virtues that we believe to be neglected and important. And I want to claim that, insofar as longtermist EA forms a community, that community is best viewed as one striving towards an unusual (and partially implicit) conception of moral and epistemic virtue. That feature of our community is, I believe, both unusual and worth preserving.
3.3.
There’s a deontic norm which I think has community precedent, and ought to be part of our explicit self-conception. I’ll first baptize the norm, before expanding on its content — it’s the norm of Practical Kantianism; or (if you prefer) impartially context-specified universalizability.
In order to clarify what I’m actually suggesting, we’ll refer to a comment from everyone’s favorite Kantian — Rob Wiblin. And, while Rob’s oeuvre contains many strident defenses of Kantianism,[3] I’ll limit myself to just a single quote. In this podcast discussion with Will, we find ourselves on the topic of how to view the expected value of contributing to some collective project (like, say, a protest), where the whole protest has positive expected value, even though it’s unclear whether your marginal contribution to the protest has positive expected value.
Rob: We think it’s actually worth thinking at a more group level, where you think: given the full cost of a project, given all of the people who might have to participate in it for it to reach a reasonable scale, and given the probability of that project as a whole, with all of those inputs succeeding, is it worth it in aggregate? And then if it is, then it’s probably worth it for each of the individual contributors to participate in it.
Will: Exactly.
Rob: And that’s a much more natural way of evaluating whether something is worthwhile than thinking about whether it’s worth you going in for one individual day more to work on the project. It’s too granular.
(Earlier in the podcast, Will echoes a similar thought, noting that “often the right way to think [about collective action] is through viewing yourself primarily as a “member of the community that you’re a part of that is taking action”).
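To make the group-level evaluation concrete, here is a minimal sketch; the protest, its cost, its success probability, and its payoff are all invented for illustration:

```python
def group_level_verdict(total_cost, success_prob, value_if_success, n_participants):
    """Evaluate the project as a whole, then impute that verdict to each
    contributor, rather than pricing one person's marginal day directly."""
    group_ev = success_prob * value_if_success - total_cost
    return group_ev, group_ev / n_participants

# A hypothetical protest: 10,000 participants, a total cost of 10,000
# person-days, and a 5% chance of a policy win worth 1,000,000 person-days.
group_ev, per_person_ev = group_level_verdict(
    total_cost=10_000,
    success_prob=0.05,
    value_if_success=1_000_000,
    n_participants=10_000,
)
print(f"Group-level EV: {group_ev:,.0f} person-days")       # 40,000
print(f"Imputed EV per participant: {per_person_ev:+,.1f}")  # +4.0
```

On this way of carving things up, the question “is the protest worth it?” is answered once, at the level of the whole collective, and each participant inherits a share of that verdict.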
In practice, I believe that many in the longtermist community would actually endorse something like Practical Kantianism. Instead of asking what marginal contribution they, as individuals, can make, longtermists are more likely to make a Kantian move — they’re more likely to treat maxims [4] as the proper object of normative evaluation, rather than the action of a single, lone individual.[5] That is, longtermists are more likely, I think, to treat the action of some larger community as foundational, and then assess the value of individual actions in virtue of their contribution to the net-effect of that community’s actions.
Of course, if you’re doing Practical Kantianism, you have to carefully specify the context, and relevant community. No one endorses claims like: “I can’t go to the shops right now, because if everyone did that, all the surgeons would be off-duty, and people would die!”. You specify the context (including relevant contextual facts about your antecedent responsibilities and commitments), and then treat the maxim as your object of evaluation, rather than the lone act of an individual agent. It’s important to specify the context impartially, too. We don’t want to allow claims like “Violet Hour can lie for the greater good, because lying, from my perspective, has better expected consequences!”.
Now, admittedly, I’ve cited just a single example, in one off-the-cuff podcast comment. But Rob here is espousing a general principle, rather than an isolated reaction to a particular problem case. Also, Rob is the Research Director at 80,000 Hours, opining on questions of how to view the rationality of aggregate, community-level decisions. So, if Will is championing the wrongness of rights violations, Eliezer consistently endorses the virtues of practical deontology, and Rob is a well-known Kantian, then I think it’s fair to say that my norm has precedent, and could be worth centering more explicitly in our self-conception.
3.4.
I recognize that my suggestion of an alternative community self-conception is somewhat vague, and raises various questions. Questions like:
“Which virtues ought we to adopt? How do we just decide upon a ✨~community self-conception~ ✨, and who could actually implement what you say? Also, what the hell is Practical Kantianism anyway? Oh, and can’t we just frame the Kantian stuff in terms of updateless decision theory?”
All good questions, all good questions, and I’m not sure my answers will be all that satisfying. To answer briefly: first, I don’t have a precise sense of our implicit virtues, though I’m thinking more about it. Secondly, implementation is hard, and would probably have to arise from highly visible EA organizations — like CEA, or 80,000 Hours, and others I’m probably unfairly leaving out — putting out public statements, and emphasizing such features in community building. (I’ll save the final two questions for another time).
4. Fin
I am suggesting a change in the longtermist community’s self-conception, but I do not think that we need to do a PR-focused ‘rebrand’. Instead, I am suggesting this alternative self-conception for the following reasons:
(1) I have an antecedent belief that the longtermist community lacks an accurate and explicit self-conception.
(2) I believe that the aftermath of FTX presents an opportunity for us to collectively explicate and reevaluate some of our core commitments.
(3) I hold out hope that an alternative community self-conception, more explicitly focused on the practical, on-the-ground norms we wish to encourage, would help mitigate the chance of something like the FTX situation happening in the future.
So, to conclude, maybe longtermism is not centrally about maximization, or at least shouldn’t be. Instead, the longtermist community should aim to be a community of time-impartial Kantians, striving to embody a series of neglected virtues.
- ^
That is, our norms about what constitutes right action.
- ^
This is irrespective of the moral theory in which you may ultimately wish to ground such virtue talk — or, indeed, of whether you object to having a ‘theory’ of the good at all.
- ^
This is a joke.
- ^
A maxim is a claim, broadly speaking, of the form: ‘do action A in context C in order to achieve ends E’.
- ^
I’m already going a bit rogue here, so I might as well speculate that endorsement of the Kantian norm may reflect one (among many) deeper differences between longtermist and neartermist EAs. Neartermist EA is primarily focused (as I understand it) on the marginal contribution of one individual, whereas longtermists are more likely to be sympathetic to the claim I made in the main text, where the actions of some larger community are treated as the foundational object of evaluation.
No thoughts on part 3., but I thought that your thinking in parts 1. and 2. was clear. Making this comment because the conclusions you point to are inconvenient but, I think, correct, and in similar past situations I’ve found it helpful to get confirmation from other people.
Agreed. The style is slightly different than what I’m used to, but the arguments seem correct and forceful. In particular, the arguments against taking common-sense morality too far seem a) correct and b) important to our community’s[1] self-conception, and not something we can just easily elide.
or at least mine
re: “Practical Kantianism”, can it avoid the standard failure modes of universal generalization—i.e., recommending harmful acts simply because other acts of the same kind had positive value?
Here’s an example:
fwiw, I think marginal value is more relevant than average value or anything else like it—it’s just that we can often (not always) take average group value to be our best guide to the marginal value of our contributions (if there are no grounds for taking ourselves to be in an unrepresentative position).
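Here is a hedged toy model of the relationship described above; the threshold project and every number in it are invented:

```python
import math

# A project succeeds (value V) iff at least T of N people contribute.
# Your contribution matters only in the pivotal case, where exactly
# T - 1 others contribute.
N, T, V = 1_000, 500, 1_000_000
average_share = V / N  # the "average group value" heuristic: 1,000 each

# With a roughly uniform credence over how many others contribute (i.e.,
# no special information), the pivotal case gets probability ~1/N, so the
# marginal EV of contributing matches the average share:
print((1 / N) * V == average_share)  # True: both come to 1,000

# With sharper information -- say each other person contributes with
# p = 0.7, making success near-certain -- you're almost never pivotal,
# and the marginal EV collapses far below the average share:
p = 0.7
pivotal_prob = math.comb(N - 1, T - 1) * p ** (T - 1) * (1 - p) ** (N - T)
print(pivotal_prob * V)  # ~1e-34: near zero, despite a high average share
```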
For more on how consequentialists can deal with collective action/inefficacy worries, see my post, ‘Five Fallacies of Collective Harm’.
(Apologies if the self-linking is annoying. I think the linked posts are helpful and relevant, but obviously feel free to downvote if you disagree!)
(No problem with self-linking, I appreciate it!)
Also, I think there’s an adequate Kantian response to your example. Am I missing something?
So, I act on the basis of maxims, but changes in my epistemic state can still appropriately inform my decision-making.
Sounds reasonable! Though if you can build in all the details of your specific individual situation, and are directed to do what’s best in light of this, do you think this ends up being recognizably distinct from act consequentialism?
(Not that convergence is necessarily a problem. It can be a happy result that different theorists are “climbing the same mountain from different sides”, to borrow Parfit’s metaphor. But it would at least suggest that the Kantian spin is optional, and the basic view could be just as well characterized in act consequentialist terms.)
The short answer is: I think the norm delivers meaningfully different verdicts for certain ways of cashing out ‘act consequentialism’, but I imagine that you (and many other consequentialists) are going to want to say that the ‘Practical Kantian’ norm is compatible with act consequentialism. I’ll first discuss the practical question of deontic norms and EA’s self-conception, and then respond to the more philosophical question.
1.
If I’m right about your view, my suggested Kantian spin would (for you) be one way among many to talk about deontic norms, which could be phrased in more explicitly act-consequentialist language. That said, I still think there’s an argument for EA as a whole making deontic norms more central to its self-conception, as opposed to a conception where some underlying theory of the good is more central. EA is trying to intervene on people’s actions, after all, and your underlying theory of the good (at least in principle) underdetermines your norms for action. So, to me, it seems better to just directly highlight the deontic norms we think are valuable. EA is not a movement of moral theorists qua moral theorists, we’re a movement of people trying to do stuff that makes the world better. Even as a consequentialist, I guess that you’re only going to want involvement with a movement that shares broadly similar views to you about the action-relevant implications of consequentialism.
I want to say that I also think there should be clear public work outlining how the various deontic norms we endorse in EA clearly follow from consequentialist theories. Otherwise, I can see internal bad actors (or even just outsiders) thinking that statements about the importance of deontological norms are just about ‘brand management’, or whatever. I think it’s important to have a consistent story about the ways in which our deontic norms relate to our more foundational principles, both so that outsiders don’t feel like they’re being misled about what EA is about, and so that we have really explicit grounds on which to condemn certain behaviors as legitimately and unambiguously violating norms that we care about.
(Also, independently, I’ve (e.g.) met many people in EA who seem to flit between ‘EUT is the right procedure for practical decision-making’ and ‘EUT is an underratedly useful tool’ — even aside from discussions of side-constraints, I don’t think we have a clear conception of what our deontic norms are, and I think clarifying this would be independently beneficial. For instance, I think it would be good to have a clearer account of the procedures that really drive our prioritization decisions).
2.
On a more philosophical level, I believe that various puzzle cases in decision theory help motivate the case for treating maxims as the appropriate evaluative focal point wrt rational decision-making, rather than acts. Here are some versions of act consequentialism that I think will diverge from the Practical Kantian norm:
Kant+CDT tells you to one-box in the standard Newcomb problem, whereas Consequentialism+CDT doesn’t (see the toy calculation below).
Consequentialism+EDT is vulnerable to XOR blackmail, whereas Kant+CDT isn’t.
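To make the first divergence concrete, here is a hedged sketch using the standard Newcomb payoffs; the predictor’s 99% accuracy is an assumed figure:

```python
# Standard Newcomb payoffs; the predictor's 99% accuracy is an assumption.
ACCURACY = 0.99
BIG, SMALL = 1_000_000, 1_000  # opaque-box prize; transparent-box prize

# Maxim-level evaluation: the predictor responds to the maxim you act on,
# so your maxim (probabilistically) fixes the opaque box's contents.
ev_one_box_maxim = ACCURACY * BIG                # box is usually full
ev_two_box_maxim = (1 - ACCURACY) * BIG + SMALL  # box is usually empty

print(f"EV of the one-boxing maxim: ${ev_one_box_maxim:,.0f}")  # $990,000
print(f"EV of the two-boxing maxim: ${ev_two_box_maxim:,.0f}")  # $11,000

# Act-level CDT instead holds the box contents fixed at decision time:
# two-boxing adds SMALL whatever the prediction was, so it two-boxes.
```

The maxim-level view treats the adopted maxim, rather than the isolated act, as the object of evaluation, which is why it favors one-boxing even under a causal reading.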
Perhaps there is a satisfying decision theory which, combined with act-consequentialism, provides you with (what I believe to be) the right answers to decision-theoretic puzzle cases, though I’m currently not convinced. I think I might also disagree with you about the implications of collective action problems for consequentialism (though I agree that what you describe as “The Rounding to Zero Fallacy” and “The First-Increment Fallacy” are legitimate errors), but I’d want to think more about those arguments before saying anything more.
Yes, agreed that what matters for EA’s purposes is agreement on its most central practical norms, which should include norms of integrity, etc., and it’s fine to have different underlying theories of what ultimately justifies these. (+ also fine, of course, to have empirical/applied disagreements about what we should end up prioritizing, etc., as a result.)
I’ll look forward to hearing more of your thoughts on consequentialism & collective action problems at some future point!
In response to part 2: I personally don’t think the principles of EA are vague.
From the introduction to EA on effectivealtruism.org, the fourth value of the movement is described as:
“Collaborative spirit: It’s possible to achieve more by working together, and doing this effectively requires high standards of honesty, friendliness, and a community perspective. Effective altruism is not about ‘ends justify the means’ reasoning, but rather is about being a good citizen, while ambitiously working toward a better world.”
https://www.effectivealtruism.org/articles/introduction-to-effective-altruism#:~:text=Collaborative spirit%3A,a better world.
It seems very clear to me that SBF’s actions violate this value.
Thanks for the comment, this is a useful source. I agree that SBF’s actions violated “high standards of honesty” (as well as, um, more lenient ones), and don’t seem like the actions of a good citizen.
Still, I’ll note that I still feel hesitant about claims like “Sam violated the principles of the EA community”, because your cited quote is not the only way that EA is defined. I agree that we can find accounts of EA under which Sam violated those principles. Relative to those criteria, it would be correct to say “Sam violated EA principles”. Thus, I think you and I both agree that saying things like “Sam acted in accordance with EA principles” would be wrong.
However, I have highlighted other accounts of “what EA is about”, under which I think it’s much harder to say that Sam straightforwardly violated those principles — accounts which place more emphasis on the core idea of maximization. And my intuitions about when it’s appropriate to make claims of the form ‘Person X violated the principles of this community’ require something close to unanimity, prior to X’s action, about what the core principles actually are, and what they commit you to. Due to varying accounts of what EA ‘is’, or is ‘about’, I reject the claim that Sam violated EA principles for much the same reason that I reject claims like ‘Sam acted in accordance with them’. So, I still think I stand behind my indeterminacy claim.
I’m unsure where we disagree. Do you think you have more lenient standards than me for when we should talk about ‘violating norms’, or do you think that (some of?) the virtues listed in your quote are core EA principles, and close-to-unanimously agreed upon?
I’ll just note that all of the links in this thread predate the “fraud in the service of effective altruism is unacceptable” post, and were written by people most would probably consider “EA leaders”.
I think I agree that “maximization” seems to be a core idea of EA. But I think I disagree that people think “what EA is about” will include maximization to the extent that Sam took it (let’s assume he actively + intentionally defrauded FTX customers for the purpose of donating it). And just because “maximization” seems to be directionally correct for most people (and thus seen to be “what EA is about”), doesn’t mean that all actions done in the name of “maximization” (assuming this is what happened) are consistent with EA principles.
I think I probably agree with your statement of EA community values being “indeterminate”. But I also think your bar for saying something is not indeterminate (requiring something close to unanimity) is too high; in that case, you’re going to be hard pressed to find many things that fit this in the EA community (we should do good better), and even within the longtermist community (future people matter).
Great post! On the tension between “maximization” vs “common-sense”, it can be helpful to distinguish two aspects of utilitarianism that are highly psychologically separable:
(1) Acceptance of instrumental harm (i.e. rejection of deontic constraints against this); and
(2) Moral ambition / scope-sensitivity / beneficentrism / optimizing within the range of the permissible. (There may be subtle differences between these different characterizations, but they clearly form a common cluster.)
Both could be seen as contrasting with “common sense”. But I think EA as a project has only ever been about the second. And I don’t think there’s any essential connection between the two—no reason why a commitment to the second should imply the first.
As generously noted by the OP [though I would encourage anyone interested in my views here to read my recent posts instead of the old one from my undergraduate days!], I’ve long argued that utilitarianism is nonetheless compatible with:
(1*) Being guided by commonsense deontic constraints, on heuristic grounds, and distrusting explicit calculations to the contrary (unless it would clearly be best for most people similarly subjectively situated to trust such calculations).
fwiw, my sense is that this is very much the mainstream view in the utilitarian tradition. Strikingly, those who deny that utilitarianism implies this are, overwhelmingly, non-utilitarians. (Of course, there are possible cases where utilitarianism will clearly advise instrumental harm, but the same is true of common-sense deontology; absolutism is very much not commonsensical.)
So when folks like Will affirm the need for EA to be guided by “commonsense moral norms”, I take it they mean something like the specific disjunction of rejecting (1) or affirming (1*), rather than a wholehearted embrace of commonsense morality, including its lax rejection of (2). But yeah, it could be helpful to come up with a concise way of expressing this more precise idea, rather than just relying on contextual understanding to fill that in!
EDITS: I made substantial edits to the last section of this comment about 14 hours after posting.
Violet Hour, here are some thoughts on your interesting approach:
Maxims create tension, the same as between context and rules
social movements and ethics-minded communities do have maxims, usually visible in their slogans.
contextualization contrasts with universalizability.
unique contexts can test the universalizability of maxims.
common contexts usually suggest applicable maxims to follow.
context matters but so do rules (maxims), it’s a well-known tension.
Community standards can decline, encouraging self-serving rationalization
intersubjective verification can protect against self-serving rationalizations.
self-serving rationalizations include invalid contextualization and invalid maxim selection.
self-serving rationalization is in service of self-interest not others’ interest.
ethics usually conflict with self-interest, another well-known tension.
intersubjective verification fails when community standards decline.
community standards decline when no one cares about or everyone agrees with the unethical/immoral behavior.
Positive virtues do not prove their worth in how they help define effectiveness of actions taken to benefit others
positive virtues (e.g., forthrightness, discretion, integrity, loyalty) can conflict.
actual consequences, either believed or confirmed, are the final measure of an action’s benefit to others.
benefit to others is independent of intentions, expectations, luck and personal rewards involved.
benefit to others is not, per se, a measure of morality or ethicality of actions.
benefit to others must be measured somehow.
those measures have little to do with positive virtues.
Given a community intending to act ethically, there’s a list of problems that can occur:
rationalizations (for Kantians, invalid contextualization or invalid maxim selection)
conflicts with self-interest
community standards decline
conflict of positive virtues
dissatisfaction with positive virtues’ impact on efficacy
In looking at these problems yourself, you pick and choose a path that deals with them. I think you are suggesting that:
“in the long run” some virtues support better outcomes for a community.
if those virtues support the unique altruistic interests of the community, adopt them community-wide.
treat those virtues as more important than, or independent of, marginal altruistic gains made by individuals.
As far as FTX issues, there’s a difference between:
describing events (what happened?)
interpreting events (what’s it mean?)
evaluating events (how do I feel about it?)
People use hindsight to manifest virtues, but protecting virtues requires foresight
evaluating events is where a lot of virtues manifest.
evaluating events happens in hindsight.
prioritizing a virtue requires foresight and proactive development of expectations.
virtues like honesty and integrity require EAs to create models of context.
EAs may differ in how they model the contexts (and relevant behaviors) of billionaires.
maxims for deciding whether EA virtues are manifest in selecting a donor therefore have conflicting contextualizations within the community.
In the case of FTX, I believe that indifference to the source of earnings predisposed the community to ignore the behavior of FTX in acquiring those earnings. Not because that’s fair or moral or consistent, but because:
the crypto industry is notoriously unethical but poorly regulated and understood to be risky.
rational, well-informed folks interested in acquiring charitable contributions have reason to ignore their source.
big finance in general is well-tolerated by the community as a source of funds.
In other words, community standards with regard to donors and their fund-raising had already declined. Therefore, nothing was considered wrong with FTX providing funds. I don’t object to that decline, necessarily, if there was in fact some decline in the first place. I’ll note that Silicon Valley ethics treats risky businesses and crypto as net positive, treating their corruption and harm as negative externalities not even worthy of regulation, given its costs. Yet crypto is the most obviously corrupt “big thing” around in big finance right now.
All this reveals a tension between:
calculations of expected value: narrow-context calculations with values taken from measures of benefit to others of EA activity
community virtue: wider-context rules guiding decisions about avoiding negative consequences of donor business activities.
In another post (currently being edited), I proposed a four-factor model of calculations of consequences, in terms of harm and help to others and harm and help to oneself, useful mainly for thought experiments. One point relevant to this discussion was that an action can cause both harm and help to others, although, actually, the whole thing seems relevant from where I sit.
How EAs decide to maximize consequences (causing help but no harm, causing known help and unknown harm, causing known harm and unknown help, causing slightly more help than harm, etc.) is a community choice.
The breakdown of community standards is a subtle problem; it’s sometimes a problem of interpretation, so I’m not sure what direction I can give about this myself. I would like to see:
what maxims from a practical Kantian model you think really apply here, with their context developed in more detail
how you propose to model contexts, particularly given your faith in Bayesian probabilities for credences, and what I anticipate will be your reliance on expected value calculations.
I really don’t think any model of context and consequences dependent on Bayesian probabilities will fit with virtue ethics well at all. You’re welcome to prove me wrong.
Ultimately, if a community decides to be self-serving and cynical in its claims of ethical rigor (ie, to lie), there’s no approach to ethics that will save the community from its ethical failure. On the other hand, a community of individuals interested in virtue or altruism will struggle with all the problems I listed above (rationalizations, community standards decline, virtues in conflict, etc).