Hi Devon, FWIW I agree with John Halstead and Michael PJ re John’s point 1.
If you’re open to considering this question further, you may be interested in my reasoning (which I arrived at independently of John and Michael), shared below.
Last November I commented on Tyler Cowen’s post to explain why I disagreed with his point:
I don’t find Tyler’s point very persuasive. Although the common-sense reading of the phrase “existential risk” makes it applicable to the sudden downfall of FTX, I think forecasting existential risks (e.g. the probability of AI takeover this century) is a very different kind of forecasting question from forecasting whether FTX would suddenly collapse, so performance at one doesn’t necessarily tell us much about performance at the other.
Additionally, and more importantly, the failure to anticipate the collapse of FTX seems not so much an example of making a bad forecast as an example of failing to even consider the hypothesis. If an EA researcher had made it their job to forecast the probability that FTX would collapse and had assigned a very low probability to it after much effort, that probably would have been a bad forecast. But that’s not what happened; in reality, EAs simply failed to consider the forecasting question at all. EAs *have* very seriously considered forecasting questions on x-risk, though.
So a better critique of EAs in the spirit of Tyler’s would not be to criticize EAs’ existential risk forecasts, but rather to suggest that there may be an existential risk that destroys humanity’s potential that isn’t even on our radar (similar to how the sudden end of FTX wasn’t on our radar). Others have certainly discussed this possibility before, though, so it wouldn’t be a new critique. E.g. Toby Ord in The Precipice put “unforeseen anthropogenic risks” over the next century at ~1 in 30. (Source: https://forum.effectivealtruism.org/posts/Z5KZ2cui8WDjyF6gJ/some-thoughts-on-toby-ord-s-existential-risk-estimates). Does Tyler think ~1 in 30 this century is too low? Or that people haven’t spent enough effort thinking about these unknown existential risks?
You made a further point, Devon, that I want to respond to as well:
There is a certain hubris in claiming you are going to “build a flourishing future” and “support ambitious projects to improve humanity’s long-term prospects” (as the FFF did on its website) only to not exist 6 months later and for reasons of fraud to boot.
I agree with you here. However, I think the hubris was SBF’s, not that of EAs or longtermists in general.
I’d go even further and say it wasn’t even the Future Fund team’s hubris.
As John commented below, “EAs did a bad job on the governance and management of risks involved in working with SBF and FTX, which is very obvious and everyone already agrees.”
But that’s a critique of the Future Fund’s (and others’) ability to identify all the right priorities for their small team in their first 6 months (or however long it was), not a sign that the Future Fund had hubris.
Note, however, that I don’t consider the Future Fund team’s failure to think of this to be a very big mark against them. Why? Because anyone (in the EA community or otherwise) could have entered the Future Fund’s Project Ideas Competition and proposed a project to investigate the integrity of SBF and his businesses, and the risk that they might suddenly collapse, both to ensure the stability of the funding source for future Future Fund projects and to protect EA’s and longtermists’ reputations from the risks of associating with SBF should he become involved in a scandal. (Even Tyler Cowen could have done so and won some easy money.) But no one did, as far as I’m aware. Given that, I conclude that it was a hard risk to spot so early on, and consequently I don’t fault the Future Fund team much for failing to spot it in their first 6 months.
There is a lesson to be learned from people’s failure to spot the risk, but that lesson is not that longtermists lack the ability to forecast existential risks well, or even that they lack the ability to build a flourishing future.