Hi Devon, FWIW I agree with John Halstead and Michael PJ re John's point 1.
If you're open to considering this question further, you may be interested in my reasoning (note that I arrived at this opinion independently of John and Michael), which I share below.
Last November I commented on Tyler Cowen's post to explain why I disagreed with his point:
I don't find Tyler's point very persuasive: although the common-sense interpretation of the phrase "existential risk" makes it applicable to the sudden downfall of FTX, I think forecasting existential risks (e.g. the probability of AI takeover this century) is a very different kind of forecasting question from forecasting whether FTX would suddenly collapse, so performance at one doesn't necessarily tell us much about performance at the other.
Additionally, and more importantly, the failure to anticipate the collapse of FTX seems not so much an example of making a bad forecast as an example of failing to even consider the hypothesis. If an EA researcher had made it their job to forecast the probability that FTX would collapse and, after much effort, assigned a very low probability to it, that probably would have been a bad forecast. But that's not what happened; in reality EAs simply never considered that forecasting question. EAs *have* very seriously considered forecasting questions on x-risk, though.
So the better critique of EAs in the spirit of Tyler's would not be to criticize EAs' existential risk forecasts, but rather to suggest that there may be an existential risk that destroys humanity's potential that isn't even on our radar (similar to how the sudden end of FTX wasn't on our radar). Others have certainly talked about this possibility before, though, so that wouldn't be a new critique. E.g. Toby Ord in The Precipice put "Unforeseen anthropogenic risks" in the next century at ~1 in 30. (Source: https://forum.effectivealtruism.org/posts/Z5KZ2cui8WDjyF6gJ/some-thoughts-on-toby-ord-s-existential-risk-estimates). Does Tyler think ~1 in 30 this century is too low? Or that people haven't spent enough effort thinking about these unknown existential risks?
You made a further point, Devon, that I want to respond to as well:
There is a certain hubris in claiming you are going to "build a flourishing future" and "support ambitious projects to improve humanity's long-term prospects" (as the FFF did on its website) only to not exist 6 months later and for reasons of fraud to boot.
I agree with you here. However, I think the hubris was SBF's hubris, not EAs' or longtermists-in-general's hubris.
I'd go even further and say that it wasn't the Future Fund team's hubris.
As John commented below, "EAs did a bad job on the governance and management of risks involved in working with SBF and FTX, which is very obvious and everyone already agrees."
But that's a critique of the Future Fund's (and others') ability to identify all the right top priorities for their small team in their first 6 months (or however long it was), not a sign that the Future Fund had hubris.
Note, however, that I don't even consider the Future Fund team's failure to think of this to be a very big critique of them. Why? Because anyone (in the EA community or otherwise) could have entered the Future Fund's Project Ideas Competition and suggested a project to investigate the integrity of SBF and his businesses and the risk that they might suddenly collapse, both to ensure the stability of the funding source for future Future Fund projects and to protect EAs' and longtermists' reputations from the risks of associating with SBF should he become involved in a scandal. (Even Tyler Cowen could have done so and won some easy money.) But no one did, as far as I'm aware. Given that, I conclude that it was a hard risk to spot so early on, and consequently I don't fault the Future Fund team all that much for failing to spot it in their first 6 months.
There is a lesson to be learned from people's failure to spot the risk, but that lesson is not that longtermists lack the ability to forecast existential risks well, or even that they lack the ability to build a flourishing future.