It is wholly unsurprising that public-facing EAs are currently denying that the ends justify the means, because they are in damage-control mode. They are trying to tame the onslaught of negative PR that EA is now getting. So even if they thought that the ends did justify the means, they would probably lie about it, because the ends (better PR) would justify the means (lying). So we cannot simply take these people at their word: whatever they truly believe, we should expect their answers to be the same.
Let’s think for ourselves, then. Would utilitarianism ever justify making high-stakes, high-reward bets? Yes, of course. Could that be what SBF was doing? Quite possibly. Because a double-or-nothing coin-flip scales; it doesn’t stop having high EV when we start dealing with big bucks. So perhaps SBF was simply being a good utilitarian and did whatever had the highest value in expectation. Only this time he landed on the ‘nothing’ side of the coin. Nothing we know so far rules this out: though what he did was risky, the rewards were also quite high.
So we cannot assume that SBF was being a bad, or ‘naive’, utilitarian. It could instead be the case that SBF was a perfect utilitarian, but utilitarianism is wrong and so perfect utilitarians are bad people. Utility and integrity are wholly independent variables, so there is no reason for us to assume a priori that they will always correlate perfectly. So if we wish to believe that integrity and expected value correlated for SBF, then we must show it. We must actually do the math. Crunch the numbers for yourself. Don’t rely on thought leaders.
Do the math, and it becomes clear that SBF’s actions were very possibly, if not probably, caused by his utilitarian-minded EV reasoning. Anyone who wishes to deny this can convince me by crunching the numbers and proving me wrong mathematically.
Because a double-or-nothing coin-flip scales; it doesn’t stop having high EV when we start dealing with big bucks.
Risky bets aren’t themselves objectionable in the way that fraud is, but to just address this point narrowly: realistic estimates put risky bets at much worse EV when you control a large fraction of the altruistic pool of money. I think a decent first approximation is that EA’s impact scales with the logarithm of its wealth. If you’re gambling a small amount of money, that means you should be ~indifferent to 50/50 double or nothing (note that even in this case it doesn’t have positive EV). But if you’re gambling with the majority of wealth that’s predictably committed to EA causes, you should be much more scared about risky bets.
(Also in this case the downside isn’t “nothing” — it’s much worse.)
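A minimal sketch of that point, under the same “impact scales with the logarithm of wealth” approximation (wealth is normalized to 1, since only the fraction staked matters; none of the numbers are estimates of EA’s or FTX’s actual finances):

```python
import math

def expected_log_change(stake_fraction: float, p_win: float = 0.5) -> float:
    """Expected change in log(wealth) from a 50/50 double-or-nothing bet on
    `stake_fraction` of the pool, assuming impact ~ log(wealth).
    Wealth is normalized to 1, so only the fraction staked matters."""
    return p_win * math.log(1 + stake_fraction) + (1 - p_win) * math.log(1 - stake_fraction)

for frac in (0.001, 0.1, 0.5, 0.9):
    print(f"stake = {frac:6.1%} of pool -> E[change in log(wealth)] = "
          f"{expected_log_change(frac):+.2e}")
```

On this assumption the bet is never strictly positive in log terms, but the penalty is negligible for a tiny stake and severe once most of the pool is on the line.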
I think marginal returns probably don’t diminish nearly as quickly as the logarithm for neartermist cause areas, but maybe that’s true for longtermist ones (where FTX/Alameda and associates were disproportionately donating), although my impression is that there’s no consensus on this, e.g. 80,000 Hours has been arguing for donations still being very valuable.
(I agree that the downside (damage to the EA community and trust in EAs) is worse than nothing relative to the funds being gambled, but that doesn’t really affect the spirit of the argument. It’s very easy to underappreciate the downside in practice, though.)
I’d actually guess that returns in longtermism diminish faster than logarithmically, given how much funders have historically struggled to find good funding opportunities.
Global poverty probably has slower diminishing marginal returns, yeah. Unsure about animal welfare. I was mostly thinking about longtermist causes.
Re 80,000 Hours: I don’t know exactly what they’ve argued, but I think “very valuable” is compatible with logarithmic returns. There are also diminishing marginal returns to direct workers in any given cause, so logarithmic returns on money doesn’t mean that money becomes unimportant compared to people, or anything like that.
Here’s Ben Todd’s post on the topic from last November: “Despite billions of extra funding, small donors can still have a significant impact”. I’d especially recommend this part from section 1:
My sense is that the bar within longtermism has come down a little bit compared to a few years ago – back then we weren’t providing much funding for things like PhD programmes, which strike me as somewhat less effective than funding core organisations (though still well worth it).
On the other hand, since longtermism is so new, there is also a lot more potential to generate and discover highly effective opportunities as the capacity of the community grows. It wouldn’t surprise me if the bar stays similar in the coming years.
Again, in a worst case scenario, there are ways that longtermists could deploy billions of dollars and still do a significant amount of good. For instance, CEPI is a $3.5bn programme to develop vaccines to fight the next pandemic – that could easily be topped up by $1bn (ideally restricted to work to develop vaccines for novel pathogens). (See more ideas.) These kinds of scalable opportunities are likely 10-100x less effective than the top longtermist opportunities we’re able to find today, but still very good (and if you put reasonable credence in longtermism, plausibly still more effective than GiveWell recommended charities).
I also expect research will uncover better scalable longtermist donation opportunities in the coming years, which means that investing to give when those opportunities arise is a more attractive option (compared to donors focused on global health).
If longtermism attracts supporters ahead of our expectations, the bar may fall further. But again, society spends less on reducing existential risk than it does on ice cream, so we could spend orders of magnitude more on longtermist aligned issues, and it would still be a minor global priority.
(Extra info on diminishing returns in longtermism: Returns probably diminish faster in longtermism than in neartermism. But longtermists also care more about the all time total amount of resources invested in an issue than how much is invested each year. This means what matters for diminishing returns are changes in how much you expect to be spent in longtermism aligned ways in the future. This means that additional funding only drives down expected returns if it’s ahead of what you already expected to be spent. So we care more about ‘positive surprises’ than changes in the total of committed funds.)
So he thought the marginal cost-effectiveness hadn’t changed much while funding had dramatically increased within longtermism over these years. I suppose it’s possible that marginal returns diminish quickly within each year even while funding grows quickly over time, though, as long as the capacity to absorb funds at similar cost-effectiveness grows with it.
Personally, I’d guess funding students’ university programs is much less cost-effective on the margin: given the distribution of research talent, students should already be fully funded if they have a decent shot of contributing; the best researchers will already be fully funded without many non-research duties (like being a teaching assistant); and other promising researchers can get internships at AI labs, both for valuable experience (80,000 Hours recommends this as a career path!) and to cover their expenses.
I also got the impression that the Future Fund’s bar was much lower, but I think this was after Ben Todd’s post.
Caroline Ellison literally says this in a blog post:
“If you abstract away the financial details there’s also a question of like, what your utility function is. Is it infinitely good to do double-or-nothing coin flips forever? Well, sort of, because your upside is unbounded and your downside is bounded at your entire net worth. But most people don’t do this, because their utility is more like a function of their log wealth or something and they really don’t want to lose all of their money. (Of course those people are lame and not EAs; this blog endorses double-or-nothing coin flips and high leverage.)”
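For what it’s worth, the maths in that quote is easy to reproduce. A rough illustration (normalized wealth, arbitrary win probabilities, no claim about anyone’s actual positions): linear expected wealth is indifferent to, or even enthusiastic about, flips that end in near-certain ruin, whereas a log-wealth utility function refuses the very first one.

```python
W0 = 1.0  # starting net worth, normalized

for p_win in (0.5, 0.6):           # a fair flip, and an arbitrary better-than-fair one
    for n in (1, 10, 50):
        p_not_ruined = p_win ** n              # you must win every flip to keep anything
        e_wealth = W0 * (2 * p_win) ** n       # linear expected wealth holds up (or grows)
        # Expected log-wealth is minus infinity whenever ruin has any probability
        # (log 0 = -inf), so a log-utility bettor declines even the first flip.
        print(f"p_win={p_win}, flips={n:>2}: P(not ruined)={p_not_ruined:.2e}, "
              f"E[wealth]={e_wealth:.2e}")
```

That is essentially her point: “most people” are implicitly log-wealth bettors and stop immediately; the blog endorses the linear-EV behaviour.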
It looks like the tumblr was actually deleted, unfortunately. I spent quite a bit of time going through it last night because I saw screenshots of it going around. (EDIT: someone archived it: https://web.archive.org/web/20210625103706/https://worldoptimization.tumblr.com/)
Utility and integrity are wholly independent variables, so there is no reason for us to assume a priori that they will always correlate perfectly. So if we wish to believe that integrity and expected value correlated for SBF, then we must show it. We must actually do the math.
This feels a bit unfair when people (i) have argued that utility and integrity will correlate strongly in practical cases (why use “perfectly” as your bar?), and (ii) that they will do so in ways that will be easy to underestimate if you just “do the math”.
You might think they’re mistaken, but some of the arguments do specifically talk about why the “assume 0 correlation and do the math”-approach works poorly, so if you disagree it’d be nice if you addressed that directly.
Utility and integrity coming apart, and in particular deception for gain, is one of the central concerns of AI safety. Shouldn’t we similarly be worried at the extremes even in human consequentialists?
It is somewhat disanalogous, though, because
We don’t expect one small group of humans to have so much power without the need to cooperate with others, like might be the case for an AGI taking over. Furthermore, the FTX/Alameda leaders had goals that were fairly aligned with a much larger community (the EA community), whose work they’ve just made harder.
Humans tend to inherently value integrity, including consequentialists. However, this could actually be a bias among consequentialists that consequentialists should seek to abandon, if we think integrity and utility should come apart at the extremes and we should go for the extremes.
(EDIT) Humans are more limited cognitively than AGIs, and are less likely to identify net positive deceptive acts and more likely to identify net negative ones than AGIs.
EDIT: On the other hand, maybe we shouldn’t trust utilitarians with AGIs aligned with their own values, either.
Assuming zero correlation between two variables is standard practice, because for any given pair of variables, it is very likely that they do not correlate. Anyone who wants to disagree must crunch the numbers and disprove it. That’s just how math works.
And if we want to treat ethics like math, then we need to actually do some math. We can’t have our cake and eat it too.
I’m not sure how literally you mean “disprove”, but on its face, “assume nothing is related to anything until you have proven otherwise” is a reasoning procedure that will never recommend any action in the real world, because we never get that kind of certainty. When humans try to achieve results in the real world, heuristics, informal arguments, and looking at what seems to have worked OK in the past are unavoidable.
I am talking about math. In math, we can at least demonstrate things for certain (and prove things for certain, too, though that is admittedly not what I am talking about).
But the point is that we should at least be able to bust out our calculators and crunch the numbers. We might not know if these numbers apply to the real world. That’s fine. But at least we have the numbers. And that counts for something.
For example, we can know roughly how much wealth SBF was gambling. We can give that a range. We can also estimate how much risk he was taking on. We can give that a range too. Then we can calculate whether the risk he took on had net positive expected value.
It’s possible that it has positive expected value only above a certain level of risk, or whatever. Perhaps we do not know whether he faced this risk. That is fine. But we can still, at any rate, see under what circumstances SBF would have been rational, acting on utilitarian grounds, to do what he did.
If these circumstances do or could describe the circumstances that SBF was in earlier this week, then that should give us reason to pause.
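To make the request concrete, here is a minimal sketch of what that calculation would look like. Every number below is a hypothetical placeholder, not an estimate of SBF’s actual stakes or probabilities; the point is only to show where the sign of the naive linear EV flips.

```python
def naive_ev(p_blowup: float, upside: float, downside: float) -> float:
    """Naive linear expected value of a risky (or fraudulent) strategy.
    All inputs are hypothetical placeholders, NOT estimates of SBF's actual
    wealth, probabilities, or payoffs:
      p_blowup - probability the strategy collapses / is caught
      upside   - value created if it works (e.g. extra donations)
      downside - harm if it collapses (customer losses, damage to trust in EA, ...)
    """
    return (1 - p_blowup) * upside - p_blowup * downside

# Sweep a grid instead of committing to any particular numbers.
for p in (0.1, 0.3, 0.5):
    for ratio in (1, 5, 20):   # downside expressed as a multiple of the upside
        print(f"p_blowup={p:.1f}, downside={ratio:>2}x upside -> "
              f"EV={naive_ev(p, upside=1.0, downside=float(ratio)):+.2f}")
```

The EV is positive exactly when downside/upside is below (1 - p)/p, so the dispute reduces to which side of that line you think the real-world parameters fall on; and, as other comments in this thread note, the downside term (customer losses plus the damage to trust in EA) is very easy to underestimate.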
Fair.
TBH, this has put me off of utilitarianism somewhat. Those silly textbook counter-examples to utilitarianism don’t look quite so silly now.
Except the textbook literally warns about this sort of thing:
This is a generalizable defense of utilitarianism against a wide range of alleged counterexamples. Such “counterexamples” invite us to imagine that a typically-disastrous class of action (such as killing an innocent person) just so happens, in this special case, to produce the best outcome. But the agent in the imagined case generally has no good basis for discounting the typical risk of disaster. So it would be unacceptably risky for them to perform the typically-disastrous act. We maximize expected value by avoiding such risks. For all practical purposes, utilitarianism recommends that we should refrain from rights-violating behaviors.
Again, warnings against naive utilitarianism have been central to utilitarian philosophy right from the start. If I could sear just one sentence into the brains of everyone thinking about utilitarianism right now, it would be this: If your conception of utilitarianism renders it *predictably* harmful, then you’re thinking about it wrong.
There’s a case that such distinctions are too complex for a not insignificant proportion of the public, and that utilitarianism therefore should not be promoted to a larger audience at all, since all the textbooks filled with nuanced discussion will collapse to a simple heuristic in the minds of some, such as ‘the ends justifying the means’ (which is obviously false).
I don’t think we should be dishonest. Given the strong case for utilitarianism in theory, I think it’s important to be clear that it doesn’t justify criminal or other crazy reckless behaviour in practice. Anyone sophisticated enough to be following these discussions in the first place should be capable of grasping this point.
If you just mean that we shouldn’t promote context-free, easily-misunderstood utilitarian slogans in superbowl ads or the like, then sure, I think that goes without saying.
It’s quite evident that people do follow discussions on utilitarianism but fail to understand the importance of integrity in a utilitarian framework, especially if they are unfamiliar with Kant. If the public finds SBF’s system of moral beliefs to blame for his actions, it will most likely be for being too utilitarian rather than not being utilitarian enough – a misunderstanding which will be difficult to correct.
Are you disagreeing with something I’ve said? I’m not seeing the connection. (I obviously agree that many people currently misunderstand utilitarianism, or I wouldn’t spend my time trying to correct those misunderstandings.)
Given the strong case for utilitarianism in theory, I think it’s important to be clear that it doesn’t justify criminal or other crazy reckless behaviour in practice.
Why should we trust you? You’re a known utilitarian philosopher. You could be lying to us right now to rehabilitate EA’s image. That’s what a utilitarian would do, after all. And you have not provided any arguments for this that are even remotely convincing, neither here nor in your post on the topic.
What are you using to justify these conclusions? EV? Is it an empirical claim? How do you know? What kind of justification are you using? And can you show us your justification? Can you show us the EV calculus? Or, if it’s empirical, then can you show us the evidence? No? So far I am seeing no arguments from you. Just assertions.
Anyone sophisticated enough to be following these discussions in the first place should be capable of grasping this point.
Really? SBF seemed pretty sophisticated. But he didn’t get the point. So maybe it’s time to update your “empirical” argument against utilitarianism being self-effacing, then.
If you just mean that we shouldn’t promote context-free, easily-misunderstood utilitarian slogans in superbowl ads or the like
Yeah… don’t think publius said that. Maybe stop misrepresenting the views of people who disagree with you. You seem to do that a lot.
Do you talk like that to your students?
As a moderator, I think some elements of this and previous comments break Forum norms. Specifically, unsubstantiated accusations of lying or misrepresentation and phrases like “when has a utilitarian ever cared about common sense” are unnecessarily rude and do not reflect a generous and collaborative mindset.
We want to be clear that this comment is in response to the tone and approach, not the stance taken by the commenter. As a moderator team we believe it’s really important to be able to discuss all perspectives on the situation with an open mind and without censoring any perspectives.
We strongly encourage all users to approach discussions in good faith, especially when disagreeing—attacking the character of an author rather than the substance of their arguments is discouraged. This is a warning, please do better in the future.
Was anything I said an “unsubstantiated accusation of lying”?
No. Perhaps it was an accusation. But it was not unsubstantiated. It was substantiated. Because I provided a straightforward argument as to why utilitarians cannot be trusted in this situation.
If you disagree with the conclusion of this argument, that’s fine. But the proper response to that is to explain why you think the argument is unsound. Not to use your mod powers.
So, then, let me ask you: why do you think this argument is unsound (assuming that you do)?
If you cannot answer this question, then you cannot honestly say that my “accusation” was unsubstantiated.
Something similar applies to my other question: “when has a utilitarian ever cared about common sense?” If you care to provide examples, I’d be happy to hear you out. Because that is why I asked the question.
But if you cannot find examples (and so do not like what the answer to my question may be), then I fail to see how that is my fault. Is asking critical questions “rude”? If yes, then quite frankly that reflects poorly on the “Forum norms”.
As does, by the way, the selective enforcement of these norms. I know that some moderators insist that enforcement of Forum norms has nothing to do with the offender’s point-of-view. But it does not take a PhD in critical analysis to see this as plainly false.
Since, as any impartial lurker on the forum could tell you, there are a handful of high-status dogmatists on here that consistently misrepresent the views of those that disagree with them; misrepresent expert consensus; and are rude, condescending, arrogant, and combative.
(Note: I am not naming names, here, so no accusation is being made. But you know who they are. And if you don’t, that speaks to the strength of the in-group bias endemic to EA.)
But I have yet to see any one of these individuals get a “warning” from a moderator. And no one who I’ve discussed this issue with has either. So, it is genuinely hard to believe that these norms are not being enforced selectively.
In fairness, sometimes the rules are necessary. I get that. You want to keep things civil, and fair enough. But it’s plainly obvious that the rules are often abused, too.
This cycle of abuse is as follows.
Someone disagrees with the predominant EA in-group thinking.
Said person voices their concern with said in-group thinking on the Forum.
Said person is met with character assassinations, misrepresentations and straw-man arguments, ad hominems, and so on. This violates Forum norms, but these norms are not enforced.
Said person is not a saint. So, they respond to this onslaught of hostility with hostility in turn. This time, Forum norms are conveniently enforced.
Said person is now deemed to be arguing “in bad faith”.
Said person’s concerns (expressed in step 2) are now dismissed out of hand on account of the allegation that they were made in bad faith. So the relevant concerns expressed in step 2 go unaddressed. The echo-chamber intensifies. The Overton window narrows.
No one seems to clue into the fact that accusing someone of bad faith is, ironically enough, itself an ad hominem.
EAs continue to go on not knowing what they don’t know, and so thinking that they know everything.
Rinse and repeat for several years.
Hubris balloons to dangerously high levels.
FTX crashes.
And now we are here.
Note that steps 1-7 describe what happened to Emile Torres, which is a shame, since many of the criticisms he expressed back in step 2 were, as it happens, correct (as, by now, should be obvious).
So perhaps if Torres hadn’t been banned, then we would have taken his concerns seriously. And perhaps if we took his concerns seriously, then none of this would have happened. Whoops. That’s a bad look, don’t you think?
So it’s worth noting, then, that the concerns I am forwarding here aren’t very different from the concerns that got Torres banned all those years ago. So, given what has since transpired, maybe it’s about time we take these concerns seriously. Because it was one thing to use mod powers to silence Torres when he made these critiques back then (please don’t play dumb, we both know it’s true). But to use mod powers to intimidate people for these same criticisms, even now, despite everything… that’s unconscionable.
I know you don’t like to hear that. But quite frankly, you need to hear it, because it’s true. I doubt that will be much comfort to you, though, so you’ll probably ban me for saying that. But once your power trip has ended, consider digging deep. Do some serious critical reflection. And then do better next time.
And I don’t mean, by the way, that you should do better as a moderator (though that is of course part of it). No. My request goes much deeper than this. I am requesting that you be better as a person. Be a better person than this. Be a better person than this.
Be honest with yourself. Have some integrity. Update your beliefs. And then accept your share of the responsibility for this mess.
But, most importantly: have some fucking shame.
Please.
It’s well overdue. Not just for you, but for all of us. Because we all contributed to this mess, in however minor a way.
Anyway. I think that’s everything I needed to say.
So, closing remarks: please don’t mistake my tough love for hostility. I understand that this is a tough time for everyone, and probably the mods especially. So, for that, I wish you all well. Genuinely. I really do wish you guys well. But, after the dust has settled, you all really need to think this stuff through. Reflect on what I said here. Really chew on it. Then do better going forward.
You could be lying to us right now to rehabilitate EA’s image.
I referenced work to this effect from my decade-old PhD dissertation, along with published articles and books from prior utilitarians, none of which could possibly have been written with “rehabilitating EA’s image” in mind.
Randomly accusing people of lying is incredibly jerkish behaviour. I’ve been arguing for almost two decades now that utilitarianism calls for honest and straightforward behaviour. (And anyone who knows me IRL can vouch for my personal integrity.) You have zero basis for making these insulting accusations. Please desist.
What are you using to justify these conclusions? EV? Is it an empirical claim?
My post on naive utilitarianism, like other academic literature on the topic (including, e.g., more drastic claims from Bernard Williams et al. that utilitarianism is outright self-effacing, or arguments by rule consequentialists like Brad Hooker), invokes common-sense empirical knowledge, drawing attention to the immense potential downside from reputational risks alongside other grounds for distrusting direct calculations as unreliable when they violate well-established moral rules.
Again, there’s a huge academic literature on this. You don’t have to trust me personally, I’m just trying to summarize some basic points.
Maybe stop misrepresenting...
What are you talking about? Publius referenced the idea that this may be “too complex for a not insignificant proportion of the public and therefore utilitarianism should not be promoted at all for a larger audience”. This could be interpreted in different (stronger or weaker) ways, depending on what one has in mind by “larger audiences”. My reply argued against a strong interpretation, and then indicated that I agreed with a weaker interpretation.
I’m not talking about your PhD dissertation. So let’s restrict our scope to SBF’s decision-making within the past few years. It is an open question: were SBF’s decisions consistent with utilitarian-minded EV reasoning?
And we can start to answer this question. We can quantify the money he was dealing with, and his potential earnings. We can quantify the range of risk he was likely dealing with. We can provide a reasonable range as to the negative consequences of him getting caught. We can plug all these numbers into our EV calculus. It is the results of these equations that we are currently discussing.
So some vague and artificial thought experiments written a decade ago are not especially relevant. Not unless you happened to run these specific EV calculations in your PhD dissertation. But given that you are a mere mortal and so cannot predict the future, I doubt that you did.
My post on naive utilitarianism, like other academic literature on the topic (including, e.g., more drastic claims from Bernard Williams et al. that utilitarianism is outright self-effacing, or arguments by rule consequentialists like Brad Hooker), invokes common-sense empirical knowledge, drawing attention to the immense potential downside from reputational risks alongside other grounds for distrusting direct calculations as unreliable when they violate well-established moral rules.
Your post is hardly “academic literature” (was it peer reviewed? Or just upvoted by many philosophically naive EAs?).
And it is common-sense empirical knowledge that SBF did what he did due to his utilitarianism + EV reasoning. It is currently only on this forum where this incredibly obvious fact is being seriously questioned.
And, besides, when has a utilitarian ever cared about common sense?
What are you talking about?
Do you think you represented your opponent’s view in the most charitable way possible? Do you think a superbowl commercial is a charitable example to be giving? Do you think that captures the essence of the critique? Or is it merely a cartoonish example, strategically chosen to make the critique look silly?
You don’t have to trust me personally
It’s not you personally. It’s utilitarians in general. Like I said in my original comment: it is wholly unsurprising that public-facing EAs are currently denying that the ends justify the means, because they are in damage-control mode. They are trying to tame the onslaught of negative PR that EA is now getting. So even if they thought that the ends did justify the means, they would probably lie about it, because the ends (better PR) would justify the means (lying). So we cannot simply take these people at their word: whatever they truly believe, we should expect their answers to be the same.
So why should we have any reason to trust any utilitarian right now? And again, I am referring to this particular situation—pointing to defences of utilitarianism written in the 1970s is not especially relevant, since they did not account for SBF’s particular situation, which is what we are currently discussing.
As I’m sure you’ll find, it’s pretty difficult to provide any reason why we should trust a utilitarian’s views on the SBF debacle. Perhaps that’s a problem for utilitarianism. We can add it to the collection.
People believing in utilitarianism could be predictably harmful, even if the theory actually says not to do the relevant harmful things. (Not endorsing this view: I think if you’ve actually spent time socially in academic philosophy, it is hard to believe that people who profess to be utilitarians are systematically more or less trustworthy than anyone else.)
As someone who has doubts about track record arguments for utilitarianism, I want to go on the record as saying I think that cuts both ways – that I don’t think SBF’s actions are a reason to think utilitarianism is false or bad (nor true or good).
Like, in order to evaluate a person’s actions morally we already need a moral theory in place. So the moral theory needs to be grounded in something else (for example, intuitions, human nature, and reasoned argument).
Sure, it’s possible that misunderstandings of the theory could prove harmful. I think that’s a good reason to push back against those misunderstandings!
I’m not a fan of the “esoteric” reasoning that says we should hide the truth because people are too apt to misuse it. I grant it’s a conceptual possibility. But, in line with my general wariness of naive utilitarian reasoning, my priors strongly favour norms of openness and truth-seeking as the best way to ward off these problems.
Interesting, thanks. This quote from SBF’s blog is particularly revealing:
The argument, roughly goes: when computing expected impact of causes, mine is 10^30 times higher than any other, so nothing else matters. For instance, there are 10^58 future humans, so increasing the odds that they exist by even .0001% is still worth 10^44 times more important that anything that impacts current humans.
Here SBF seems to be going full throttle on his utilitarianism and EV reasoning. It’s worth noting that many prominent leaders in EA also argue for this sort of thing in their academic papers (their public facing work is usually more tame).
For example, here’s a quote from Nick Bostrom (head honcho at the Future of Humanity Institute). He writes:
Given these estimates, it follows that the potential for approximately 10^38 human lives is lost every century that colonization of our local supercluster is delayed; or equivalently, about 10^29 potential human lives per second.
On these estimates, $1 billion of spending would provide at least a 0.001% absolute reduction in existential risk. That would mean that every $100 spent had, on average, an impact as valuable as saving one trillion (resp., one million, 100) lives on our main (resp. low, restricted) estimate – far more than the near-future benefits of bednet distribution (p. 15).
This seems very different from Will’s recent tweets, where he denied that the ends justified the means (because, surely, if 100 dollars could save a trillion lives, then we’d be justified in stealing 100 dollars?)
Anyway. It seems like SBF took these arguments to heart. And here we are.
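As a quick sanity check on the arithmetic inside the second quote (taking its stated figures at face value rather than endorsing them), those numbers cohere only if the value at stake is on the order of 10^24 expected future lives:

```python
# Figures taken directly from the quoted passage; nothing here is independent data.
spend = 1e9               # "$1 billion of spending"
risk_reduction = 1e-5     # "0.001% absolute reduction in existential risk"
lives_per_100 = 1e12      # "$100 ... as valuable as saving one trillion lives" (main estimate)

hundred_dollar_units = spend / 100                           # 1e7 units of $100
implied_lives_saved = hundred_dollar_units * lives_per_100   # 1e19 lives in expectation
implied_future_size = implied_lives_saved / risk_reduction   # ~1e24 lives at stake

print(f"Implied number of expected future lives: {implied_future_size:.0e}")
```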
Note that from a utilitarian point of view, none of this really matters much. Here’s another quote from Nick Bostrom (section 2, first paragraph):
Our intuitions and coping strategies have been shaped by our long experience with risks such as dangerous animals, hostile individuals or tribes, poisonous foods, automobile accidents, Chernobyl, Bhopal, volcano eruptions, earthquakes, draughts, World War I, World War II, epidemics of influenza, smallpox, black plague, and AIDS. These types of disasters have occurred many times and our cultural attitudes towards risk have been shaped by trial-and-error in managing such hazards. But tragic as such events are to the people immediately affected, in the big picture of things – from the perspective of humankind as a whole – even the worst of these catastrophes are mere ripples on the surface of the great sea of life. They haven’t significantly affected the total amount of human suffering or happiness or determined the long-term fate of our species.
So if all wars and pandemics in human history are “mere ripples” from a utilitarian standpoint, then what does this FTX scandal amount to?
Probably not much. It is very bad, to be sure, but only because it is very bad PR. The fact that SBF committed massive financial fraud is not, in itself, of any consequence. The people immediately affected by this are mere rounding errors on spreadsheets, from a utilitarian standpoint. So the expressions of remorse currently being given by EA leaders… are those real?
If these leaders take utilitarianism seriously, then probably not.
And when the leaders in EA claim to care, are they being honest? Is the apology tour genuine, or just an act?
To answer this, we need to think like a utilitarian. Why would a utilitarian care about a mere ripple? That makes no sense. But why would a utilitarian pretend to care about a mere ripple? Well, for good PR, of course. So we cannot take anything that any EA thought-leader says at face value. These people have not earned our trust.
And on that note: if the EA thought-leaders are lying to us, then this has serious implications for the movement. Because our goal here is to do the most good. And so far it seems like the utilitarianism that has infected the minds of EA elites is preventing us from doing that. Since the utilitarian vision of the good seems not so good after all.
So we need to seriously consider the possibility, then, that the biggest obstacle facing the EA movement is the current EA leadership.
And if that’s the case, then waiting on them to fix this mess from the top-down might be hopeless. Change needs to come from us, in spite of the leadership.
I think the quotes from Sam’s blog are very interesting and are pretty strong evidence for the view that Sam’s thinking and actions were directly influenced by some EA ideas.
I think the thinking around EA leadership is way too premature and presumptive. There are many years (like a decade?) of EA leadership generally being actually good people and not liars. There are also explicit calls in “official” EA sources that specifically say that the ends do not justify the means in practice, honesty and integrity are important EA values, and pluralism and moral humility are important (which leads to not doing things that would transgress other reasonable moral views).
Most of the relevant documentation is linked in Will’s post.
Edit: After reading the full blog post, the quote is actually Sam presenting the argument that one can calculate which cause is highest priority, the rest be damned.
He goes on to say in the very next paragraph:
This line of thinking is implicitly assuming that the impacts of causes add together rather than multiply, and I think that’s probably not a very good model.
He concludes the post by stating that the multiplicative model, which he thinks is more likely, indicates that both reducing x-risk and improving the future are important.
None of this proves anything. But it’s significantly changed my prior, and I now think it’s likely that the EA movement should heavily invest in multiple causes, not just one.
There’s another post on that same page where he lists his donations for 2016, and they include donations to x-risk and meta EA orgs, as well as donations to global health and animal welfare orgs.
So nevermind, I don’t think those blog posts are positive evidence for Sam being influenced by EA ideas to think that present people don’t matter or that fraud is justified.
Ya, they aren’t really talking about the numbers, even though a utilitarian should probably accept instrumental harm to innocents for a large enough benefit, at least in theory. Maybe they distrust this logic so much in practice, possibly based on historical precedent like communism, that they endorse a general rule against it. But it would still be good to see some numbers.
I read that the Future Fund has granted something like $200 million already, and FTX/Alameda leadership invested probably something like half a billion dollars in Anthropic. And they were probably expecting to donate more. Presumably they didn’t expect to get caught or have a bank run, at least not this soon. Maybe they even expected that they could eventually make sure they had enough cash to cover all customer investments, so no customer would actually ever be harmed even in the case of a bank run (although they’d still be exposed to risks they were lied to about until then). Plausibly they underestimated the risk of getting caught, but maybe by their own lights, it’ll already have been worth it even with getting caught, as long as the EA community doesn’t pay it all back.
If our integrity, public trust/perception, lost potential EAs and ability to cooperate with others are worth this much, should we* just pay everything we got from FTX and associates back to FTX customers? And maybe more, for deterrence and for the cases that don’t get caught?
*possibly our major funders, not individual grantees.
It is wholly unsurprising that public-facing EAs are currently denying that the ends justify the means, because they are in damage-control mode. They are trying to tame the onslaught of negative PR that EA is now getting. So even if they thought that the ends did justify the means, they would probably lie about it.
I think that’s part of why Will etc are giving lots of examples of things they said publicly before FTX exploded where they argued against this kind of reasoning.
I think there may be two separate actions to analyze here: the decision to take extreme risks with FTX/Alameda’s own assets to start with, and the decision to convert customer funds in an attempt to prevent Alameda, FTT, FTX, SBF, and the Future Fund from collapsing, in that order.
If that is true, it isn’t an answer to say SBF shouldn’t have been taking extreme risks with a huge fraction of EA-aligned money. At the time the fraud / no-fraud decision was to be made, that may no longer have been an option.
So EA needs to be clear on whether SBF should have allowed his wealth / much of the EA Treasury to collapse rather than risk/convert customer funds, because that may have been the choice he was faced with a week ago.
Fair enough. I tried to explain that they were different in the comment section of another post, but was met with downvotes and whole walls of text trying to argue with me. So I’ve largely given up trying to make those distinctions clear on this forum. It’s too tiresome.
I believe the ‘walls of text’ that Adrian is referring to are mine. I’d just like to clarify that I was not trying to collapse the distinction between a decision procedure and the rightness criterion of utilitarianism. I was merely arguing that the concept of expected value can be used both to decide what action should be taken (at least in certain circumstances)[1] and whether an action is / was morally right (arguably in all circumstances) - indeed, this is a popular formulation of utilitarianism. I was also trying to point out that whether an action is good, ex ante, is not necessarily identical to whether the consequences of that action are good, ex post. If anyone wants more detail you can view my comments here.
Re: do the ends justify the means?
It is wholly unsurprising that public facing EAs are currently denying that ends justify means. Because they are in damage control mode. They are tying to tame the onslaught of negative PR that EA is now getting. So even if they thought that the ends did justify the means, they would probably lie about it. Because the ends (better PR) would justify the means (lying). So we cannot simply take these people at their word. Because whatever they truly believe, we should expect their answers to be the same.
Let’s think for ourselves, then. Would utilitarianism ever justify making high-stakes high-reward bets? Yes, of course. Could that be what SBF was doing? Quite possibly. Because a double-or-nothing coin-flip scales; it doesn’t stop having high EV when we start dealing with big bucks. So perhaps SBF was simply being a good utilitarian and did whatever had the highest value in expectation. Only this time he landed on the ‘nothing’ side of the coin. So far, there is nothing we know so far that rules this out. Because, though what he did was risky, the rewards were also quite high.
So we cannot assume that SBF was being bad, or ‘naive’ utilitarian. Because it could instead be the case that SBF was a perfect utilitarian, but utilitarianism is wrong and so perfect utilitarians are bad people. Because utility and integrity are wholly independent variables, so there is no reason for us to assume a priori that they will always correlate perfectly. So if we wish to believe that integrity and expected value correlated for SBF, then we must show it. We must actually do the math. Crunch the numbers for yourself. Don’t rely on thought leaders.
By doing this, it becomes clear that SBF’s actions were very possibly if not probably caused by his utilitarian-minded EV reasoning. Anyone who wishes to deny this can convince me by crunching the numbers and proving me wrong mathematically.
Risky bets aren’t themselves objectionable in the way that fraud is, but to just address this point narrowly: Realistic estimates puts risky bets at much worse EV when you control a large fraction of the altruistic pool of money. I think a decent first approximation is that EA’s impact scales with the logarithm of its wealth. If you’re gambling a small amount of money, that means you should be ~indifferent to 50⁄50 double or nothing (note that even in this case it doesn’t have positive EV). But if you’re gambling with the majority of wealth that’s predictably committed to EA causes, you should be much more scared about risky bets.
(Also in this case the downside isn’t “nothing” — it’s much worse.)
I think marginal returns probably don’t diminish nearly as quickly as the logarithm for neartermist cause areas, but maybe that’s true for longtermist ones (where FTX/Alameda and associates were disproportionately donating), although my impression is that there’s no consensus on this, e.g. 80,000 Hours has been arguing for donations still being very valuable.
(I agree that the downside (damage to the EA community and trust in EAs) is worse than nothing relative to the funds being gambled, but that doesn’t really affect the spirit of the argument. It’s very easy to underappreciate the downside in practice, though.)
I’d actually guess that longtermism diminishes faster than logarithmic, given how much funders have historically struggled to find good funding opportunities.
Global poverty probably have slower diminishing marginal returns, yeah. Unsure about animal welfare. I was mostly thinking about longtermist causes.
Re 80,000 Hours: I don’t know exactly what they’ve argued, but I think “very valuable” is compatible with logarithmic returns. There are also diminishing marginal returns to direct workers in any given cause, so logarithmic returns on money doesn’t mean that money becomes unimportant compared to people, or anything like that.
(I didn’t vote on your comment.)
Here’s Ben Todd’s post on the topic from last November:
Despite billions of extra funding, small donors can still have a significant impact
I’d especially recommend this part from section 1:
So he thought the marginal cost-effectiveness hadn’t changed much while funding had dramatically increased within longtermism over these years. I suppose it’s possible marginal returns diminish quickly within each year, even if funding is growing quickly over time, though, as long as the capacity to absorb funds at similar cost-effectiveness grows with it.
Personally, I’d guess funding students’ university programs is much less cost-effective on the margin, because of the distribution of research talent, students should already be fully funded if they have a decent shot of contributing, the best researchers will already be fully funded without many non-research duties (like being a teaching assistant), and other promising researchers can get internships at AI labs both for valuable experience (80,000 Hours recommends this as a career path!) and to cover their expenses.
I also got the impression that the Future Fund’s bar was much lower, but I think this was after Ben Todd’s post.
Caroline Ellison literally says this in a blog post:
“If you abstract away the financial details there’s also a question of like, what your utility function is. Is it infinitely good to do double-or-nothing coin flips forever? Well, sort of, because your upside is unbounded and your downside is bounded at your entire net worth. But most people don’t do this, because their utility is more like a function of their log wealth or something and they really don’t want to lose all of their money. (Of course those people are lame and not EAs; this blog endorses double-or-nothing coin flips and high leverage.)”
So no, I don’t think anyone can deny this.
Link?
https://at.tumblr.com/worldoptimization/slatestarscratchpad-all-right-more-really-stupid/8ob0z57u66zr
EDIT: The tumblr has been taken down.
EDIT #2: Someone archived it: https://web.archive.org/web/20210625103706/https://worldoptimization.tumblr.com/
That link doesn’t work for me. Do you have another one, or has it been taken down?
It looks like the tumblr was actually deleted, unfortunately. I spent quite a bit of time going through it last night because I saw screenshots of it going around.
Hey @Lin BL, someone archived it! I just found this link:
https://web.archive.org/web/20210625103706/https://worldoptimization.tumblr.com/
This feels a bit unfair when people (i) have argued that utility and integrity will correlate strongly in practical cases (why use “perfectly” as your bar?), and (ii) that they will do so in ways that will be easy to underestimate if you just “do the math”.
You might think they’re mistaken, but some of the arguments do specifically talk about why the “assume 0 correlation and do the math”-approach works poorly, so if you disagree it’d be nice if you addressed that directly.
Utility and integrity coming apart, and in particular deception for gain, is one of the central concerns of AI safety. Shouldn’t we similarly be worried at the extremes even in human consequentialists?
It is somewhat disanalogous, though, because
We don’t expect one small group of humans to have so much power without the need to cooperate with others, like might be the case for an AGI taking over. Furthermore, the FTX/Alameda leaders had goals that were fairly aligned with a much larger community (the EA community), whose work they’ve just made harder.
Humans tend to inherently value integrity, including consequentialists. However, this could actually be a bias among consequentialists that consequentialists should seek to abandon, if we think integrity and utility should come apart at the extremes and we should go for the extremes.
(EDIT) Humans are more limited cognitively than AGIs, and are less likely to identify net positive deceptive acts and more likely to identify net negative one than AGIs.
EDIT: On the other hand, maybe we shouldn’t trust utilitarians with AGIs aligned with their own values, either.
Assuming zero correlation between two variables is standard practice. Because for any given set of two variables, it is very likely that they do not correlate. Anyone that wants to disagree must crunch the numbers and disprove it. That’s just how math works.
And if we want to treat ethics like math, then we need to actually do some math. We can’t have our cake and eat it too
I’m not sure how literally you mean “disprove”, but at it’s face, “assume nothing is related to anything until you have proven otherwise” is a reasoning procedure that will never recommend any action in the real world, because we never get that kind of certainty. When humans try to achieve results in the real world, heuristics, informal arguments, and looking at what seems to have worked ok in the past are unavoidable.
I am talking about math. In math, we can at least demonstrate things for certain (and prove things for certain, too, though that is admittedly not what I am talking about).
But the point is that we should at least be to bust out our calculators and crunch the numbers. We might not know if these numbers apply to the real world. That’s fine. But at least we have the numbers. And that counts for something.
For example, we can know roughly how much wealth SBF was gambling. We can give that a range. We also can estimate how much risk he was taking on. We can give that a range too. Then we can calculate if the risk he took on had net positive expected value in expectation
It’s possible that it has expected value in expectation, only above a certain level of risk, or whatever. Perhaps we do not know whether he faced this risk. That is fine. But we can still at any rate see in under what circumstances SBF would have been rational, acting on utilitarian grounds, to do what he did.
If these circumstances sound like do or could describe the circumstances that SBF was in earlier this week, then that should give us reason to pause.
Fair.
TBH, this has put me off of utilitarianism somewhat. Those silly textbook counter-examples to utilitarianism don’t look quite so silly now.
Except the textbook literally warns about this sort of thing:
Again, warnings against naive utilitarianism have been central to utilitarian philosophy right from the start. If I could sear just one sentence into the brains of everyone thinking about utilitarianism right now, it would be this: If your conception of utilitarianism renders it *predictably* harmful, then you’re thinking about it wrong.
There’s the case that such distinctions are too complex for a not insignificant proportion of the public and therefore utilitarianism should not be promoted at all for a larger audience, since all the textbooks filled with nuanced discussion will collapse to a simple heuristic in the minds of some, such as ‘ends justifying the means’ (which is obviously false).
I don’t think we should be dishonest. Given the strong case for utilitarianism in theory, I think it’s important to be clear that it doesn’t justify criminal or other crazy reckless behaviour in practice. Anyone sophisticated enough to be following these discussions in the first place should be capable of grasping this point.
If you just mean that we shouldn’t promote context-free, easily-misunderstood utilitarian slogans in superbowl ads or the like, then sure, I think that goes without saying.
It’s quite evident people do follow discussions on utilitarianism but fail to understand the importance of integrity in a utilitarian framework, especially if one is unfamiliar with Kant. If the public finds SBF’s system of moral beliefs to blame for his actions, it will most likely be for being too utilitarian rather than not being utilitarian enough – a misunderstanding which will be difficult to correct.
Are you disagreeing with something I’ve said? I’m not seeing the connection. (I obviously agree that many people currently misunderstand utilitarianism, or I wouldn’t spend my time trying to correct those misunderstandings.)
Why should we trust you? You’re a known utilitarian philosopher. You could be lying to us right now to rehabilitate EA’s image. That’s what a utilitarian would do, after all. And you have not provided any arguments for this that are even remotely convincing, neither here nor in your post on the topic.
What are you using to justify these conclusions? EV? Is it an empirical claim? How do you know? What kind of justification are you using? And can you show us your justification? Can you show us the EV calculus? Or, if it’s empirical, then can you show us the evidence? No? So far I am seeing no arguments from you. Just assertions.
Really? SBF seemed pretty sophisticated. But he didn’t get the point. So maybe it’s time to update your “empirical” argument against utilitarianism being self-effacing, then.
Yeah.… don’t think publius said that. Maybe stop misrepresenting the views of people who disagree with you. You seem to do that a lot.
Do you talk like that to your students?
As a moderator, I think some elements of this and previous comments break Forum norms. Specifically, unsubstantiated accusations of lying or misrepresentation and phrases like “when has a utilitarian ever cared about common sense” are unnecessarily rude and do not reflect a generous and collaborative mindset.
We want to be clear that this comment is in response to the tone and approach, not the stance taken by the commenter. As a moderator team we believe it’s really important to be able to discuss all perspectives on the situation with an open mind and without censoring any perspectives.
We strongly encourage all users to approach discussions in good faith, especially when disagreeing—attacking the character of an author rather than the substance of their arguments is discouraged. This is a warning, please do better in the future.
Was anything I said an “unsubstantiated accusation of lying”?
No. Perhaps it was an accusation. But it was not unsubstantiated. It was substantiated. Because I provided a straightforward argument as to why utilitarians cannot be trusted in this situation.
If you disagree with the conclusion of this argument, that’s fine. But the proper response to that is to explain why you think the argument is unsound. Not to use your mod powers.
So, then, let me ask you: why do you think this argument is unsound (assuming that you do)?
If you cannot answer this question, then you cannot honestly say that my “accusation” was unsubstantiated.
Something similar applies to my other question: “when has a utilitarian ever cared about common sense?” If you care to provide examples, I’d be happy to hear you out. Because that is why I asked the question.
But if you cannot find examples (and so do not like what the answer to my question may be), then I fail to see how that is my fault. Is asking critical questions “rude”? If yes, then quite frankly that reflects poorly on the “Form norms”.
As does, by the way, the selective enforcement of these norms. I know that some moderators insist that enforcement of Forum norms has nothing to do with the offender’s point-of-view. But it does not take a PhD in critical analysis to see this as plainly false.
Since, as any impartial lurker on the forum could tell you, there are a handful of high-status dogmatists on here that consistently misrepresent the views of those that disagree with them; misrepresent expert consensus; and are rude, condescending, arrogant, and combative.
(Note: I am not naming names, here, so no accusation is being made. But you know who they are. And if you don’t, that speaks to the strength of the in-group bias endemic to EA.)
But I have yet to see any one of these individuals get a “warning” from a moderator. And no one who I’ve discussed this issue with has either. So, it is genuinely hard to believe that these norms are not being enforced selectively.
In fairness, sometimes the rules are necessary. I get that. You want to keep things civil, and fair enough. But it’s plainly obvious that the rules are often abused, too.
This cycle of abuse is as follows.
Someone disagrees with the predominant EA in-group thinking.
Said person voices their concern with said in-group thinking on the Forum.
Said person is met with character assassinations, misrepresentations and strawmen arguments, ad hominens, and so on. This violates Forum norms, but these norms are not enforced.
Said person is not a saint. So, they respond to this onslaught of hostility with hostility in turn. This time, Forum norms are conveniently enforced.
Said person is now deemed to be arguing “in bad faith”.
Said person’s concerns (expressed in step 2) are now dismissed out of hand on account of the allegation that they were made in bad faith. So the relevant concerns expressed in step 2 go unaddressed. The echo-chamber intensifies. The Overton window narrows.
No one seems to clue into the fact that accusing someone of bad faith is, ironically enough, itself an ad hominen.
EAs continue to go on not knowing what they don’t know, and so thinking that they know everything.
Rinse and repeat for several years.
Hubris balloons to dangerously high levels.
FTX crashes.
And now we are here.
Note that steps 1-7 describe what happened to Emile Torres. Which is a shame, since many of the criticisms he expressed back in step 2 were, as it happens, correct (as, by now, should be obvious).
So perhaps if Torres hadn’t been banned, then we would have taken his concerns seriously. And perhaps if we took his concerns seriously, then none of this would have happened. Whoops. That’s a bad look, don’t you think?
So it’s worth noting, then, that the concerns I am forwarding here aren’t very different from the concerns that got Torres banned all those years ago. So, given what has since transpired, maybe it’s about time we take these concerns seriously. Because it was one thing to use mod powers to silence Torres when he made these critiques back then (please don’t play dumb, we both know it’s true). But to use mod powers to intimidate people for these same criticisms, even now, despite everything… that’s unconscionable.
I know you don’t like to hear that. But quite frankly, you need to hear it, because it’s true. I doubt that will be much comfort to you, though, so you’ll probably ban me for saying that. But once your power trip has ended, consider digging deep. Do some serious critical reflection. And then do better next time.
And I don’t mean, by the way, that you should do better as a moderator (though that is of course part of it). No. My request goes much deeper than this. I am requesting that you be better as a person. Be a better person than this. Be a better person than this.
Be honest with yourself. Have some integrity. Update your beliefs. And then accept your share of the responsibility for this mess.
But, most importantly: have some fucking shame.
Please.
It’s well overdue. Not just for you, but for all of us. Because we all contributed to this mess, in however minor a way.
Anyway. I think that’s everything I needed to say.
So, closing remarks: please don’t mistake my tough love for hostility. I understand that this is a tough time for everyone, and probably the mods especially. So, for that, I wish you all well. Genuinely. I really do wish you guys well. But, after the dust has settled, you all really need to think this stuff through. Reflect on what I said here. Really chew on it. Then do better going forward.
I referenced work to this effect from my decade-old PhD dissertation, along with published articles and books from prior utilitarians, none of which could possibly have been written with “rehabilitating EA’s image” in mind.
Randomly accusing people of lying is incredibly jerkish behaviour. I’ve been arguing for almost two decades now that utilitarianism calls for honest and straightforward behaviour. (And anyone who knows me IRL can vouch for my personal integrity.) You have zero basis for making these insulting accusations. Please desist.
My post on naive utilitarianism, like other academic literature on the topic (including, e.g., more drastic claims from Bernard Williams et al. that utilitarianism is outright self-effacing, or arguments by rule consequentialists like Brad Hooker), invokes common-sense empirical knowledge, drawing attention to the immense potential downside from reputational risks alongside other grounds for distrusting direct calculations as unreliable when they violate well-established moral rules.
Again, there’s a huge academic literature on this. You don’t have to trust me personally, I’m just trying to summarize some basic points.
What are you talking about? Publius referenced the idea that this may be “too complex for a not insignificant proportion of the public and therefore utilitarianism should not be promoted at all for a larger audience”. This could be interpreted in different (stronger or weaker) ways, depending on what one has in mind by “larger audiences”. My reply argued against a strong interpretation, and then indicated that I agreed with a weaker interpretation.
I’m not talking about your PhD dissertation.
So let’s restrict our scope to SBF’s decision-making within the past few years. It is an open question: were SBF’s decisions consistent with utilitarian-minded EV reasoning?
And we can start to answer this question. We can quantify the money he was dealing with, and his potential earnings. We can quantify the range of risk he was likely dealing with. We can provide a reasonable range as to the negative consequences of him getting caught. We can plug all these numbers into our EV calculus. It is the results of these equations that we are currently discussing.
So some vague and artificial thought experiments written a decade ago are not especially relevant. Not unless you happened to run these specific EV calculations in your PhD dissertation. But given that you are a mere mortal and so cannot predict the future, I doubt that you did.
Your post is hardly “academic literature” (was it peer reviewed? Or just upvoted by many philosophically naive EAs?).
And it is common-sense empirical knowledge that SBF did what he did due to his utilitarianism + EV reasoning. It is currently only on this forum that this incredibly obvious fact is being seriously questioned.
And, besides, when has a utilitarian ever cared about common sense?
Do you think you represented your opponent’s view in the most charitable way possible? Do you think a Super Bowl commercial is a charitable example to be giving? Do you think that captures the essence of the critique? Or is it merely a cartoonish example, strategically chosen to make the critique look silly?
It’s not you personally. It’s utilitarians in general. Like I said in my original comment: it is wholly unsurprising that public facing EAs are currently denying that ends justify means. Because they are in damage control mode. They are trying to tame the onslaught of negative PR that EA is now getting. So even if they thought that the ends did justify the means, they would probably lie about it. Because the ends (better PR) would justify the means (lying). So we cannot simply take these people at their word. Because whatever they truly believe, we should expect their answers to be the same.
So why should we have any reason to trust any utilitarian right now? And again, I am referring to this particular situation—pointing to defences of utilitarianism written in the 1970s is not especially relevant, since they did not account for SBF’s particular situation, which is what we are currently discussing.
As I’m sure you’ll find, it’s pretty difficult to provide any reason why we should trust a utilitarian’s views on the SBF debacle. Perhaps that’s a problem for utilitarianism. We can add it to the collection.
People believing utilitarianism could be predictably harmful, even if the theory actually says not to do the relevant harmful things. (Not endorsing this view: I think if you’ve actually spent time socially in academic philosophy, it is hard to believe that people who profess to be utilitarians are systematically more or less trustworthy than anyone else.)
As someone who has doubts about track record arguments for utilitarianism, I want to go on the record as saying I think that cuts both ways – that I don’t think SBF’s actions are a reason to think utilitarianism is false or bad (nor true or good).
Like, in order to evaluate a person’s actions morally we already need a moral theory in place. So the moral theory needs to be grounded in something else (like for example intuitions, human nature and reasoned argument).
Sure, it’s possible that misunderstandings of the theory could prove harmful. I think that’s a good reason to push back against those misunderstandings!
I’m not a fan of the “esoteric” reasoning that says we should hide the truth because people are too apt to misuse it. I grant it’s a conceptual possibility. But, in line with my general wariness of naive utilitarian reasoning, my priors strongly favour norms of openness and truth-seeking as the best way to ward off these problems.
Also note Sam’s own blog
Interesting, thanks. This quote from SBF’s blog is particularly revealing:
Here SBF seems to be going full throttle on his utilitarianism and EV reasoning. It’s worth noting that many prominent leaders in EA also argue for this sort of thing in their academic papers (their public facing work is usually more tame).
For example, here’s a quote from Nick Bostrom (head honcho at the Future of Humanity Institute). He writes:
That sentence is in the third paragraph.
Then you have Will MacAskill and Hilary Greaves saying stuff like:
This seems very different from Will’s recent tweets, where he denied that the ends justified the means (because, surely, if 100 dollars could save a trillion lives, then we’d be justified in stealing 100 dollars?)
Anyway. It seems like SBF took these arguments to heart. And here we are.
Note that from a utilitarian point of view, none of this really matters much. Here’s another quote from Nick Bostrom (section 2, first paragraph):
So if all wars and pandemics in human history are “mere ripples” from a utilitarian standpoint, then what does this FTX scandal amount to?
Probably not much. It is very bad, to be sure, but only because it is very bad PR. The fact that SBF committed massive financial fraud is not, in itself, of any issue. So the people immediately affected by this are mere rounding errors on spreadsheets, from a utilitarian standpoint. So the expressions of remorse currently being given by EA leaders… are those real?
If these leaders take utilitarianism seriously, then probably not.
And when the leaders in EA claim to care, are they being honest? Is the apology tour genuine, or just an act?
To answer this, we need to think like a utilitarian. Why would a utilitarian care about a mere ripple? That makes no sense. But why would a utilitarian pretend to care about a mere ripple? Well, for good PR, of course. So we cannot take anything that any EA thought-leader says at face value. These people have not earned our trust.
And on that note: if the EA thought-leaders are lying to us, then this has serious implications for the movement. Because our goal here is to do the most good. And so far it seems like the utilitarianism that has infected the minds of EA elites is preventing us from doing that. Since the utilitarian vision of the good seems not so good after all.
So we need to seriously consider the possibility, then, that the biggest obstacle facing the EA movement is the current EA leadership.
And if that’s the case, then waiting on them to fix this mess from the top-down might be hopeless. Change needs to come from us, in spite of the leadership.
I’m not exactly sure how this could be done, but I know there has been some talk about democratizing the CEA and enacting whistleblower protections. I’m not sure how we should implement this, though.
Suggestions are welcome.
I think the quotes from Sam’s blog are very interesting, and are pretty strong evidence for the view that Sam’s thinking and actions were directly influenced by some EA ideas.
I think the thinking around EA leadership is way too premature and presumptive. There are many years (like a decade?) of EA leadership generally being actually good people and not liars. There are also explicit calls in “official” EA sources that specifically say that the ends do not justify the means in practice, that honesty and integrity are important EA values, and that pluralism and moral humility are important (which leads to not doing things that would transgress other reasonable moral views).
Most of the relevant documentation is linked in Will’s post.
Edit: After reading the full blog post, the quote is actually Sam presenting the argument that one can calculate which cause is highest priority, the rest be damned.
He goes on to say in the very next paragraph:
He concludes the post by stating that the multiplicative model, which he thinks is more likely, indicates that both reducing x-risk and improving the future are important.
There’s another post on that same page where he lists his donations for 2016, and they include donations to x-risk and meta EA orgs, as well as donations to global health and animal welfare orgs.
So nevermind, I don’t think those blog posts are positive evidence for Sam being influenced by EA ideas to think that present people don’t matter or that fraud is justified.
Ya, they aren’t really talking about the numbers, even though a utilitarian should probably accept instrumental harm to innocents for a large enough benefit, at least in theory. Maybe they distrust this logic so much in practice, possibly based on historical precedent like communism, that they endorse a general rule against it. But it would still be good to see some numbers.
I read that the Future Fund has granted something like $200 million already, and FTX/Alameda leadership invested probably something like half a billion dollars in Anthropic. And they were probably expecting to donate more. Presumably they didn’t expect to get caught or have a bank run, at least not this soon. Maybe they even expected that they could eventually make sure they had enough cash to cover all customer investments, so no customer would actually ever be harmed even in the case of a bank run (although they’d still be exposed to risks they were lied to about until then). Plausibly they underestimated the risk of getting caught, but maybe by their own lights, it’ll already have been worth it even with getting caught, as long as the EA community doesn’t pay it all back.
If our integrity, public trust/perception, lost potential EAs, and ability to cooperate with others are worth this much, should we* just pay everything we got from FTX and associates back to FTX customers? And maybe more, for deterrence and the cases that don’t get caught?
*possibly our major funders, not individual grantees.
I think that’s part of why Will and others are giving lots of examples of things they said publicly, before FTX exploded, where they argued against this kind of reasoning.
I think there may be two separate actions to analyze here: the decision to take extreme risks with FTX/Alameda’s own assets to start with, and the decision to convert customer funds in an attempt to prevent Alameda, FTT, FTX, SBF, and the Future Fund from collapsing, in that order.
If that is true, it isn’t an answer to say SBF shouldn’t have been taking extreme risks with a huge fraction of EA aligned money. At the time the fraud / no fraud decision was to be made, that may no longer have been an option.
So EA needs to be clear on whether SBF should have allowed his wealth / much of the EA Treasury to collapse rather than risk/convert customer funds, because that may have been the choice he was faced with a week ago.
One reaction when reading this is that you might be kind of eliding the difference between utilitarianism per se and expected value decision analysis.
Fair enough. I tried to explain that they were different in the comment section of another post, but was met with downvotes and whole walls of text trying to argue with me. So I’ve largely given up trying to make those distinctions clear on this forum. It’s too tiresome.
I believe the ‘walls of text’ that Adrian is referring to are mine. I’d just like to clarify that I was not trying to collapse the distinction between a decision procedure and the rightness criterion of utilitarianism. I was merely arguing that the concept of expected value can be used both to decide what action should be taken (at least in certain circumstances)[1] and whether an action is / was morally right (arguably in all circumstances) - indeed, this is a popular formulation of utilitarianism. I was also trying to point out that whether an action is good, ex ante, is not necessarily identical to whether the consequences of that action are good, ex post. If anyone wants more detail you can view my comments here.
Although usually other decision procedures, like following general rules, are more advisable, even if one maintains the same rightness criterion.