I’m sympathetic to the view that calibration on questions with larger bodies of obviously relevant evidence isn’t transferable to predictions on more speculative questions. Ultimately I believe that the amount of skill transfer is an open empirical question, though I think the absence of strong theorizing about the relevant mechanisms involved counts heavily against deferring to (e.g.) Metaculus predictions about AI timelines.
A potential note of disagreement on your final sentence: while I think focusing on calibration can Goodhart us away from some of the most important sources of epistemic insight, there are “predictions” (broadly construed) that I think we ought to weigh more highly than “domain-relevant specific accomplishments and skills”.
E.g., if you’re sympathetic to EA’s current focus on AI, then I think it’s sensible to think “oh, maybe Yudkowsky was onto something”, and upweight the degree to which you should engage in detail with his worldview, and potentially defer to the extent that you don’t possess a theory which jointly explains both his foresight and the errors you currently think he’s making.
My objection to ‘Bayesian Mindset’ and the use of subjective probabilities to communicate uncertainty stems (in part) from the picture imposed by the probabilistic mode of thinking, which is something like: “you have a clear set of well-identified hypotheses, and the primary epistemic task is to become calibrated on such questions.” This leads me to suspect that EAs are undervaluing the ‘novel hypothesis generation’ component of predictions, though there is still a lot of value to be had from (novel) predictions.
Thanks :)