Imagine a forecaster you haven’t previously heard of tells you that there’s a high probability of a novel pandemic (“pigeon flu”) next month, and their technical arguments are too complicated for you to follow.[1]
Suppose you want to figure out how much to defer to them, and after digging you discover the following facts:
a) The forecaster previously made consistently and egregiously bad forecasts about monkeypox, COVID-19, Ebola, SARS, and 2009 H1N1.
b) The forecaster made several elementary mistakes in a theoretical paper on Bayesian statistics.
c) The forecaster has a really bad record at videogames, like bronze tier at League of Legends.
I claim that the general competency argument technically goes through for a), b), and c). However, for a practical answer on deference, a) is much more damning than b), and especially c): you would expect domain-specific ability at predicting pandemics to be much stronger evidence about whether the pigeon flu prediction is reasonable than general competence as revealed by mathematical ability/conscientiousness or videogame skill.
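One way to see why a) dominates is to treat each fact as evidence in an odds-form Bayesian update on "this forecaster is reliable." The sketch below uses entirely made-up likelihood ratios, chosen only to illustrate the shape of the argument: domain-specific track record gets a strong ratio, general competence a weak one, videogame skill a nearly uninformative one.

```python
# Toy Bayesian update: how much should each fact shift our credence
# that the forecaster's pigeon flu call is reliable?
# All likelihood ratios are hypothetical, for illustration only.

def update(prior, likelihood_ratio):
    """Posterior via Bayes' rule in odds form:
    posterior odds = prior odds * likelihood ratio."""
    prior_odds = prior / (1 - prior)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

prior = 0.5  # start agnostic about the forecaster's reliability

# P(evidence | reliable) / P(evidence | unreliable), hypothetical values:
lr_a = 1 / 20   # consistently bad pandemic forecasts: strong domain evidence
lr_b = 1 / 3    # elementary math errors: weaker, general-competence evidence
lr_c = 1 / 1.2  # bronze tier at League of Legends: nearly uninformative

for label, lr in [("a", lr_a), ("b", lr_b), ("c", lr_c)]:
    print(f"after {label}): credence in reliability = {update(prior, lr):.2f}")
```

With these (again, invented) numbers, a) alone drags credence from 0.50 to about 0.05, b) to 0.25, and c) barely moves it, which is the practical sense in which the general competency argument "goes through" for all three while mattering very differently.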
With a quote like:
Hardly anyone associated with Future Fund saw the existential risk to… Future Fund, even though they were as close to it as one could possibly be.
I am thus skeptical about their ability to predict existential risk more generally, and for systems that are far more complex and also far more distant.
The natural interpretation to me is that Cowen (and, by quoting him, the authors of the post) is saying that the Future Fund’s failure to predict the FTX fraud, and thus the “existential risk to FF,” is akin to a): a dispositive, domain-specific bad forecast that should be indicative of their ability to predict existential risk more generally. This is like asking how much you should trust someone predicting pigeon flu when they’ve been wrong on past pandemics and pandemic scares.
To me, however, this failure, while significant as evidence about general competency, is more similar to b). It’s embarrassing and evidence of poor competence to make elementary errors in math. Similarly, it’s embarrassing and evidence of poor competence to fail to consider all the risks to your organization. But using the phrase “existential risk” to tie them together is just a semantics game (in the same way that “why would I trust the Bayesian updates in your pigeon flu forecasting when you’ve made elementary math errors in a Bayesian statistics paper” is a bit of a semantics game).
EAs do not, to my knowledge, claim to be experts on all existential risks, broadly and colloquially defined. Some subset of EAs do claim to be experts on global-scale existential risks like dangerous AI or engineered pandemics, which is a very different proposition.
[1] Or, alternatively, you think their arguments are inside-view correct but you don’t have a good sense of the selection biases involved.
I agree that the focus on competency at existential risk research specifically is misplaced. But I still think the general competency argument goes through. And as I say elsewhere in the thread: tabooing “existential risk” and instead looking at longtermism, it looks (and is) pretty bad that a flagship org branded as “longtermist” didn’t last a year!
Funnily enough, the “pigeon flu” example may soon cease to be hypothetical. We may need to look at the track record of various agencies and individuals to assess their predictions on H5N1.
Fair enough. The implication is there though.