Interesting, thanks. This quote from SBF’s blog is particularly revealing:
The argument, roughly, goes: when computing expected impact of causes, mine is 10^30 times higher than any other, so nothing else matters. For instance, there are 10^58 future humans, so increasing the odds that they exist by even .0001% is still worth 10^44 times more than anything that impacts current humans.
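For concreteness, the arithmetic behind the quoted claim runs roughly as follows; the ~10^8 baseline for a large near-term intervention is my own assumption, chosen so that his 10^44 figure comes out, and is not in the quote:

$$10^{58} \times 0.0001\% = 10^{58} \times 10^{-6} = 10^{52} \ \text{expected future lives}, \qquad \frac{10^{52}}{10^{8}} = 10^{44}.$$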
Here SBF seems to be going full throttle on his utilitarianism and EV reasoning. It’s worth noting that many prominent leaders in EA also argue for this sort of thing in their academic papers (their public-facing work is usually tamer).
For example, here’s a quote from Nick Bostrom (head honcho at the Future of Humanity Institute). He writes:
Given these estimates, it follows that the potential for approximately 10^38 human lives is lost every century that colonization of our local supercluster is delayed; or equivalently, about 10^29 potential human lives per second.
That sentence is in the third paragraph.
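As a quick sanity check on Bostrom’s conversion (a century is roughly 3.16 × 10^9 seconds):

$$\frac{10^{38} \ \text{lives per century}}{3.16 \times 10^{9} \ \text{seconds per century}} \approx 3 \times 10^{28} \approx 10^{29} \ \text{lives per second}.$$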
Then you have Will MacAskill and Hilary Greaves saying stuff like:
On these estimates, $1 billion of spending would provide at least a 0.001% absolute reduction in existential risk. That would mean that every $100 spent had, on average, an impact as valuable as saving one trillion (resp. one million, 100) lives on our main (resp. low, restricted) estimate – far more than the near-future benefits of bednet distribution (p. 15).
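For what it’s worth, the trillion-lives figure checks out arithmetically if you plug in their headline estimate of roughly 10^24 expected future lives (that figure is my reading of the paper, not part of the quote):

$$10^{24} \times 10^{-5} = 10^{19} \ \text{lives in expectation per } \$10^{9}, \qquad \frac{10^{19} \ \text{lives}}{10^{7} \ \text{hundred-dollar units}} = 10^{12} \ \text{lives per } \$100.$$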
This seems very different from Will’s recent tweets, where he denied that the ends justify the means (because, surely, if $100 could save a trillion lives, then we’d be justified in stealing $100?).
Anyway. It seems like SBF took these arguments to heart. And here we are.
Note that from a utilitarian point of view, none of this really matters much. Here’s another quote from Nick Bostrom (section 2, first paragraph):
Our intuitions and coping strategies have been shaped by our long experience with risks such as dangerous animals, hostile individuals or tribes, poisonous foods, automobile accidents, Chernobyl, Bhopal, volcano eruptions, earthquakes, draughts, World War I, World War II, epidemics of influenza, smallpox, black plague, and AIDS. These types of disasters have occurred many times and our cultural attitudes towards risk have been shaped by trial-and-error in managing such hazards. But tragic as such events are to the people immediately affected, in the big picture of things – from the perspective of humankind as a whole – even the worst of these catastrophes are mere ripples on the surface of the great sea of life. They haven’t significantly affected the total amount of human suffering or happiness or determined the long-term fate of our species.
So if all wars and pandemics in human history are “mere ripples” from a utilitarian standpoint, then what does this FTX scandal amount to?
Probably not much. It is very bad, to be sure, but only because it is very bad PR. The fact that SBF committed massive financial fraud is not, in itself, of any consequence; the people immediately affected by it are mere rounding errors on spreadsheets, from a utilitarian standpoint. So the expressions of remorse currently being offered by EA leaders… are those real?
If these leaders take utilitarianism seriously, then probably not.
And when EA leaders claim to care, are they being honest? Is the apology tour genuine, or just an act?
To answer this, we need to think like a utilitarian. Why would a utilitarian care about a mere ripple? That makes no sense. But why would a utilitarian pretend to care about a mere ripple? Well, for good PR, of course. So we cannot take anything that any EA thought-leader says at face value. These people have not earned our trust.
And on that note: if the EA thought-leaders are lying to us, then this has serious implications for the movement. Our goal here is to do the most good, and so far it seems like the utilitarianism that has infected the minds of EA elites is preventing us from doing that, since the utilitarian vision of the good seems not so good after all.
So we need to seriously consider the possibility, then, that the biggest obstacle facing the EA movement is the current EA leadership.
And if that’s the case, then waiting on them to fix this mess from the top down might be hopeless. Change needs to come from us, in spite of the leadership.
I’m not exactly sure how this could be done, but I know there has been some talk about democratizing CEA and enacting whistleblower protections. I’m not sure how we should implement this, though. Suggestions are welcome.
I think the quotes from Sam’s blog are very interesting and are pretty strong evidence for the view that Sam’s thinking and actions were directly influenced by some EA ideas.
I think the thinking around EA leadership is way too premature and presumptuous. There are many years (a decade or so?) of EA leadership generally being good people and not liars. There are also explicit statements in “official” EA sources that specifically say the ends do not justify the means in practice, that honesty and integrity are important EA values, and that pluralism and moral humility are important (which leads to not doing things that would transgress other reasonable moral views).
Most of the relevant documentation is linked in Will’s post.
Edit: After reading the full blog post, I see the quote is actually Sam presenting the argument that one can calculate which cause is highest priority, the rest be damned.
He goes on to say in the very next paragraph:
This line of thinking is implicitly assuming that the impacts of causes add together rather than multiply, and I think that’s probably not a very good model.
He concludes the post by stating that the multiplicative model, which he thinks is more likely, indicates that both reducing x-risk and improving the future are important.
None of this proves anything. But it’s significantly changed my prior, and I now think it’s likely that the EA movement should heavily invest in multiple causes, not just one.
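To make the additive-vs-multiplicative distinction concrete, here is a minimal sketch with made-up numbers (the variable names and values are mine, not Sam’s):

```python
# Toy contrast between additive and multiplicative impact models.
# Illustrative numbers only; nothing here is from Sam's post.

# Two stylized "causes": p = probability humanity survives (x-risk work),
# q = quality of the future conditional on survival (trajectory work).
p, q = 0.5, 0.5
d = 0.01  # a small improvement one marginal donation could buy

# Additive model: total impact is p + q, so each cause's marginal value
# is independent of the other, and you fund only the single best cause.
print((p + d) + q - (p + q))  # marginal gain is always d

# Multiplicative model: total impact is p * q, so the value of improving
# one factor scales with the size of the other.
print((p + d) * q - p * q)  # gain from better survival odds: d * q
print(p * (q + d) - p * q)  # gain from a better future: d * p

# Under the multiplicative model, letting either factor stay low caps the
# whole product -- which is why it points toward funding multiple causes.
```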
There’s another post on that same page where he details his donations for 2016, and they include donations to x-risk and meta-EA orgs as well as to global health and animal welfare orgs.
So never mind: I don’t think those blog posts are positive evidence that Sam was influenced by EA ideas to think that present people don’t matter or that fraud is justified.