The difference in that example is that Scholtz is one person, so the analogy doesn't hold. EA is a movement made up of many, many people with different strengths, roles, motives, etc., and CERTAINLY there are some people in the movement whose job it was to mitigate PR/longterm risks to the movement (or, at a minimum, there are some people who thought long and hard about those risks).
I picture the criticism more like this: EA is a pyramid set in the ground, but upside down. At the top of the upside-down pyramid, where things are wide, there are people working to ensure the longterm future goes well on the object level; that group would perhaps include Scholtz from your example.
At the bottom of the pyramid, things come to a point, and that point represents the people on the lookout for x-risks to the endeavour itself. That group is so small that it turned out to be the reason things toppled, at least with respect to FTX. That was indeed a problem, but it says nothing about the value of doing x-risk work.
I think that is a charitable interpretation of Cowen's statement: "Hardly anyone associated with Future Fund saw the existential risk to… Future Fund, even though they were as close to it as one could possibly be."
I think, charitably, he isn't saying that any given x-risk researcher should have seen an x-risk to the FTX project coming. Do you?
I think I just don't agree with your charitable reading. The very next paragraph makes it very clear that Cowen means this to suggest that we should think less well of actual existential risk research:
Hardly anyone associated with Future Fund saw the existential risk to… Future Fund, even though they were as close to it as one could possibly be.
I am thus skeptical about their ability to predict existential risk more generally, and for systems that are far more complex and also far more distant.
I think that's plain wrong, and Cowen is actually doing the cheap rhetorical trick of "existential risk in one context equals existential risk in another context". I like Cowen normally, but IMO Scott's parody is dead on.
"EA didn't spot the risk of FTX and so they need better PR/management/whatever" would be fine, but I don't think he was saying that.
Yeah, I suppose we just disagree then. I think such a big error and hit to the community should downgrade any rational person's confidence in what EA has to offer, and also downgrade their trust that EA is getting it right.
Another side point: many EAs like Cowen and think he is right most of the time. I find it suspicious that when Cowen says something negative about EA, he gets labeled with words like "daft".
Hi Devon, FWIW I agree with John Halstead and Michael PJ re John's point 1.
If you're open to considering this question further, you may be interested in my reasoning (note that I arrived at this opinion independently of John and Michael), which I share below.
Last November I commented on Tyler Cowen's post to explain why I disagreed with his point:
I don't find Tyler's point very persuasive: despite the fact that the common-sense interpretation of the phrase "existential risk" makes it applicable to the sudden downfall of FTX, in actuality I think forecasting existential risks (e.g. the probability of AI takeover this century) is a very different kind of forecasting question than forecasting whether FTX would suddenly collapse, so performance at one doesn't necessarily tell us much about performance on the other.
Additionally, and more importantly, the failure to anticipate the collapse of FTX seems to be not so much an example of making a bad forecast as an example of failing to even consider the hypothesis. If an EA researcher had made it their job to forecast the probability that FTX collapses and had assigned a very low probability to it after much effort, that probably would have been a bad forecast. But that's not what happened; in reality EAs just failed to even consider that forecasting question. EAs *have* very seriously considered forecasting questions on x-risk, though.
So the better critique of EAs in the spirit of Tyler's would not be to criticize EA's existential risk forecasts, but rather to suggest that there may be an existential risk that destroys humanity's potential that isn't even on our radar (similar to how the sudden end of FTX wasn't on our radar). Others have certainly talked about this possibility before, though, so that wouldn't be a new critique. E.g., Toby Ord in The Precipice put "Unforeseen anthropogenic risks" in the next century at ~1 in 30. (Source: https://forum.effectivealtruism.org/posts/Z5KZ2cui8WDjyF6gJ/some-thoughts-on-toby-ord-s-existential-risk-estimates). Does Tyler think ~1 in 30 this century is too low? Or that people haven't spent enough effort thinking about these unknown existential risks?
You made a further point, Devon, that I want to respond to as well:
There is a certain hubris in claiming you are going to "build a flourishing future" and "support ambitious projects to improve humanity's long-term prospects" (as the FFF did on its website) only to not exist 6 months later and for reasons of fraud to boot.
I agree with you here. However, I think the hubris was SBF's hubris, not EAs' or longtermists-in-general's hubris.
I'd even go further and say that it wasn't the Future Fund team's hubris.
As John commented below, "EAs did a bad job on the governance and management of risks involved in working with SBF and FTX, which is very obvious and everyone already agrees."
But that's a critique of the Future Fund's (and others') ability to identify all the right top priorities for their small team in their first 6 months (or however long it was), not a sign that the Future Fund had hubris.
Note, however, that I don't even consider the Future Fund team's failure to think of this to be a very big critique of them. Why? Because anyone (in the EA community or otherwise) could have entered The Future Fund's Project Ideas Competition and suggested a project of investigating the integrity of SBF and his businesses, and the risk that they might suddenly collapse, both to ensure the stability of the funding source for future Future Fund projects and to protect EA's and longtermists' reputation from the risks of associating with SBF should he become involved in a scandal. (Even Tyler Cowen could have done so and won some easy money.) But no one did (as far as I'm aware). Given that, I conclude that it was a hard risk to spot so early on, and consequently I don't fault the Future Fund team all that much for failing to spot it in their first 6 months.
There is a lesson to be learned from people's failure to spot the risk, but that lesson is not that longtermists lack the ability to forecast existential risks well, or even that they lack the ability to build a flourishing future.
I initially disagreed with the Scott analogy, but thinking it through changed my mind. Simply make the following modification:
"Leading UN climatologists are in serious condition after all being wounded in hurricane Smithfield, which also killed as many people as were harmed by the FTX scandal. These climatologists claim that their models can predict the temperature of the Earth from now until 2200, but they couldn't even predict a hurricane in their own neighborhood. Why should we trust climatologists to protect us from some future catastrophe, when they can't even protect themselves or those nearby in the present?"
Now we are talking about a group rather than one person, and what they missed is much more directly within their domain expertise. That is, it feels like something they shouldn't have been able to miss, just as EA money was squarely within the FTX Future Fund team's domain expertise.
Would you say any rational person should downgrade their opinion of the climatology community and any output it has to offer, and downgrade their trust that climatologists are getting their 2200 climate models right?
I shared the modification with an EA who, like me, at first agreed with Cowen. Their response was something like: "OK, so the climatologists not seeing the existential neartermist threat to themselves still appears to be a serious failure on their part that needs to be addressed (people they know died!), but I agree it would be a mistake on my part to downgrade my confidence in their 2200 climate change model because of it."
However, we conceded that there is a catch: if the climatology community persistently finds its top UN climatologists wounded in hurricanes to the point that they can't work on their models, then rationally we ought to update toward their productive output being lower than expected, because they seem to have this neartermist blindspot about their own wellbeing and that of those nearby. This concession comes with asterisks, though. If we assume, for the sake of argument, that climatology research benefits greatly from climatologists getting close to hurricanes, then we should expect climatologists, as a group, to suffer more hurricane wounds. In that case, when climatologists do get hurricane wounds, we should still update, but not as strongly.
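To make that "update, but not as strongly" step concrete, here is a minimal Bayesian sketch; every probability in it is an assumption chosen only to illustrate the direction of the effect. When close-up exposure to hurricanes is a normal part of the work, wounds are less surprising even without a blindspot, so the same observation moves the posterior less.

```python
# Toy Bayes calculation (all probabilities invented) for the "catch" above:
# how much should seeing "top climatologists wounded in a hurricane" raise
# P(neartermist blindspot)? Compare a field where hurricane proximity is rare
# with one where getting close to hurricanes is part of the research.

def posterior(prior, p_wounds_if_blindspot, p_wounds_if_no_blindspot):
    """Bayes' rule for P(blindspot | wounds observed)."""
    num = prior * p_wounds_if_blindspot
    return num / (num + (1 - prior) * p_wounds_if_no_blindspot)

prior = 0.10  # assumed prior that the community has the blindspot

low_exposure = posterior(prior, 0.50, 0.05)   # wounds are surprising without a blindspot
high_exposure = posterior(prior, 0.50, 0.30)  # wounds are fairly likely either way

print(f"P(blindspot | wounds), low-exposure field:  {low_exposure:.2f}")   # ~0.53
print(f"P(blindspot | wounds), high-exposure field: {high_exposure:.2f}")  # ~0.16
```

On these made-up numbers the update still goes in the same direction in both cases; it is just much weaker when the exposure itself explains the evidence.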
Ultimately, after thinking this through, I updated from agreeing with Cowen to disagreeing with him. I'd be curious whether and where you disagree.
This feels wrong to me? Gell-Mann amnesia is more about general competency, whereas I thought Cowen was referring specifically to the category of "existential risk" (which I think is a semantics game, but others disagree)?
Imagine a forecaster you haven't previously heard of tells you that there's a high probability of a novel pandemic ("pigeon flu") next month, and their technical arguments are too complicated for you to follow.[1]
Suppose you want to figure out how much to defer to them, and you dig through their record and find the following facts:
a) The forecaster previously made consistently and egregiously bad forecasts about monkeypox, covid-19, Ebola, SARS, and 2009 H1N1.
b) The forecaster made several elementary mistakes in a theoretical paper on Bayesian statistics.
c) The forecaster has a really bad record at videogames, like bronze tier at League of Legends.
I claim that the general competency argument technically goes through for a), b), and c). However, for a practical answer on deference, a) is much more damning than b), and especially than c), since you might expect domain-specific ability at predicting pandemics to be much stronger evidence for whether the pigeon flu prediction is reasonable than general competence as revealed by mathematical ability/conscientiousness or videogame ability.
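To make the deference point concrete, here is a minimal Bayesian sketch; the likelihood ratios are invented purely for illustration. Each piece of evidence counts against the forecaster, but how far it should move you depends on how diagnostic it is of pandemic-forecasting skill in particular.

```python
# Toy Bayesian sketch of the deference argument (all numbers invented).
# Hypothesis H: "the forecaster is reliable at pandemic forecasting."
# Each piece of evidence gets an assumed likelihood ratio
# P(evidence | not-H) / P(evidence | H); higher = more damning.

def update(prior_odds, likelihood_ratio_against):
    """Posterior odds of H after seeing evidence that counts against it."""
    return prior_odds / likelihood_ratio_against

prior_odds = 1.0  # 50/50 on H before digging through the record

evidence = {
    "a) bad forecasts on past pandemics": 20.0,   # highly diagnostic (assumed)
    "b) math errors in a stats paper": 3.0,       # weakly diagnostic (assumed)
    "c) bronze tier at League of Legends": 1.1,   # barely diagnostic (assumed)
}

for label, lr in evidence.items():
    odds = update(prior_odds, lr)
    p = odds / (1 + odds)
    print(f"{label}: P(H) falls from 0.50 to {p:.2f}")
```

On these assumed numbers, a) collapses the odds of deference-worthiness while c) barely moves them, which is the practical asymmetry being described.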
With a quote like
Hardly anyone associated with Future Fund saw the existential risk to… Future Fund, even though they were as close to it as one could possibly be.
I am thus skeptical about their ability to predict existential risk more generally, and for systems that are far more complex and also far more distant.
The natural interpretation to me is that Cowen (and, by quoting him, the authors of the post) is trying to say that the FF not predicting the FTX fraud, and thus the "existential risk to FF," is akin to a): a dispositive, domain-specific bad forecast that should be indicative of their ability to predict existential risk more generally. This is akin to how much you should trust someone predicting pigeon flu when they've been wrong on past pandemics and pandemic scares.
To me, however, this failure, while significant as evidence about general competency, is more similar to b). It's embarrassing and evidence of poor competence to make elementary errors in math. Similarly, it's embarrassing and evidence of poor competence to fail to consider all the risks to your organization. But using the phrase "existential risk" is just a semantics game tying them together (in the same way that "why would I trust the Bayesian updates in your pigeon flu forecasting when you've made elementary math errors in a Bayesian statistics paper" is a bit of a semantics game).
EAs do not, to my knowledge, claim to be experts on all existential risks, broadly and colloquially defined. Some subset of EAs do claim to be experts on global-scale existential risks like dangerous AI or engineered pandemics, which is a very different proposition.
[1] Or, alternatively, you think their arguments are inside-view correct but you don't have a good sense of the selection biases involved.
I agree that the focus on competency at existential risk research specifically is misplaced. But I still think the general competency argument goes through. And as I say elsewhere in the thread: tabooing "existential risk" and instead looking at longtermism, it looks (and is) pretty bad that a flagship org branded as "longtermist" didn't last a year!
Funnily enough, the "pigeon flu" example may cease to be a hypothetical. Pretty soon, we may need to look at the track record of various agencies and individuals to assess their predictions on H5N1.
Thank you! I remember hearing about Bayesian updates, but rationalizations can wipe those away quickly. From the perspective of Popper, EAs should try "taking the hypothesis that EA..." and then try to prove themselves wrong, instead of using a handful of data points to reach their preferred, statistically irrelevant conclusion, all the while feeling confident.
I don't think the parody works in its current form. The climate scientist claims expertise on climate x-risk through being a climate-science expert, not through being an expert on x-risk more generally. So him being wrong on other x-risks doesn't update my assessment of his views on climate x-risk that much. In contrast, if the climate scientist's organization built its headquarters in a flood plain and didn't buy insurance, a flood that destroyed the HQ would reduce my confidence in their ability to assess climate x-risk, because they would have shown themselves incompetent at least once at assessing climate risks close to them.
In contrast, EA (and the FF in particular) asserts (or asserted) expertise in x-risk more generally. For someone claiming that kind of expertise, the events that would cause me to downgrade are different than for a subject-matter expert. Missing an x-risk under one's nose would count. While I don't think "existential risk in one context equals existential risk in another context," I don't think past performance has no bearing on estimates of future performance either.
I think assessing the extent to which the "miss" on FTX should cause a reasonable observer to downgrade EA's x-risk credentials has been made difficult by the silence-on-advice-of-legal-counsel approach. To the extent that the possibility of FTX funding drying up wasn't even on the radar of top leadership, that would be a very serious downgrade for me. (Actually, it would be a significant downgrade in general confidence for any similarly sized movement that lacked awareness that promised billions from a three-year-old crypto company had a good chance of not materializing.) A failure to specifically recognize the risk of very shady business practices (even if not Madoff 2.0) would be a significant demerit in light of the well-known history of such things in the crypto space. To the extent that there was clear awareness and the probabilities were just wrong in hindsight, that is only a minor demerit for me.
To perhaps make it clearer: I think EA is trying to be expert in "existential risks to humanity", and that really does have almost no overlap with "existential risks to individual firms or organizations".
Or to sharpen the parody: if it were a climate-risk org that had gotten into trouble because it was funded by FTX, would that downgrade your expectation of their ability to assess climate risks?
But on mainstream EA assumptions about x-risk, the failure of the Future Fund materially increased existential risk to humanity. You'd need to find a similar event that materially changed the risk of catastrophic climate change for the analogy to potentially hold; the death of a single researcher or the loss of a non-critical funding source for climate-mitigation efforts doesn't work for me.
More generally, I think it's probably reasonable to downgrade for missing FTX on "general competence" and "ability to predict and manage risk" as well. I think both of those attributes are correlated with "ability to predict and manage existential risk," the latter more so than the former. Given that existential-risk expertise is a difficult attribute to measure, it's reasonable to downgrade it when downgrading one's assessment of more measurable attributes. Although that effect would also apply to the climate-mitigation movement if it suffered an FTX-level setback involving insiders, the justification for listening to climate scientists isn't nearly as heavily loaded on "ability to predict and manage existential risk." It's primarily loaded on domain-specific expertise in climate science, and missing FTX wouldn't make me think materially less of the relevant people as scientists.
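As a rough illustration of how an update on a more measurable attribute can propagate to a harder-to-measure one, here is a toy sketch; the correlations and the size of the "FTX miss" are assumptions, not estimates. Treating the attributes as jointly standard normal, the expected shift in the hard-to-measure skill is just the correlation times the observed shortfall.

```python
# Toy sketch (assumed numbers): propagating a bad observation on a measurable
# attribute to the hard-to-measure attribute "ability to predict and manage
# existential risk". For jointly standard-normal attributes,
# E[x-risk skill | proxy = z] = rho * z.

observed_shortfall = -2.0  # the FTX miss treated as a 2-sigma-bad showing (assumption)

proxy_correlations = {
    "general competence": 0.3,                  # assumed weaker correlation
    "ability to predict and manage risk": 0.6,  # assumed stronger correlation
}

for proxy, rho in proxy_correlations.items():
    implied_shift = rho * observed_shortfall
    print(f"Downgrade via {proxy!r} (rho={rho}): expected x-risk-skill shift "
          f"of {implied_shift:+.1f} sigma")
```

On these made-up correlations, the same miss implies roughly twice the downgrade via risk-management ability as via general competence, matching the "latter more so than the former" point.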
To be clear, I'm not endorsing the narrative that EA is near-useless on x-risk because it missed FTX. My own assumption is that people recognized a risk that FTX funding wouldn't come through, and that the leaders recognized a risk that SBF was doing shady stuff (cf. the leaked leader chat), although perhaps not a Madoff 2.0. I think those risks were likely underestimated, which leads me to a downgrade but not a massive one.
Tbh I took the Gell-Mann amnesia interpretation and just concluded that he's probably being daft more often in areas I don't know so much about.
This is what Cowen was doing with his original remark.
Cowen is saying that he thinks EA is less generally competent because of not seeing the x-risk to the Future Fund.
Again, if this were true he would not specifically phrase it as existential risk (unless maybe he was actively trying to mislead).
Fair enough. The implication is there though.
I agree that is the other way out of the puzzle. I wonder whom to even trust if everyone is susceptible to this problem...
Alternatively, one could have said something like
This, too, would not have been a good argument.
Scott's analogy is correct, in that the problem with the criticism is that the thing someone failed to predict was on a different topic. It's not reasonable to conclude that a climate scientist is bad at predicting the climate because they are bad at predicting mass shootings. If a thousand climate scientists were predicting the climate a hundred years from now and they all died in an earthquake yesterday, it would not be reasonable to conclude that their climate models were wrong because they failed to predict something outside the scope of their models.