Good judgment is obviously broader than the narrow “forecasting” Tetlock is studying. But it seems to me that, other than high-level values questions (e.g. average vs aggregate utilitarianism), it all comes down to prediction skill in some sense, as a necessary consequence of consequentialism. If you can think of something that’s part of good judgment and not either part of core values or of prediction in a broad sense, I’d like to hear what specifically it is, because I can’t think of anything.
“Ultimately, actions are good or bad based solely on their consequences” necessarily implies that your chosen actions will be better if you can predict outcomes better (all else being equal, of course, especially your degree of adherence to the plan).
All this description of skills that are supposedly separate from forecasting, e.g. “picking the right questions”, “deciding which kinds of forecasting errors are more acceptable than others”, etc., sounds like a failure to rigorously think through what it means to be good at forecasting. Picking the right questions is just Fermi-izing applied at a higher level than the Superforecasters are doing it. “Picking the right kinds of errors” really seems to be about planning for robustness in the face of catastrophe, arguing against a sort of straw-man expected value calculation that I don’t think an actually good forecaster would be naive enough to make.
Judgment is more about forecasting the consequences of your own actions/the actions you recommend to others, vs. the counterfactual where you/they don’t take the action, than computing a single probability for an event you’re not influencing. And you will never be able to calibrate it as well as you can calibrate Tetlockian forecasting because the thing you’re really interested in is the marginal change between the choice you made and the best other one you could have made, rather than a yes/no outcome. But it’s still forecasting.
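(To make that contrast concrete, here is a minimal sketch in Python, with made-up forecasts and outcomes purely for illustration: resolved yes/no forecasts can be scored and calibration-checked directly, whereas a decision only ever resolves the branch you actually took.)

# Illustrative sketch: scoring calibration for resolved yes/no forecasts.
# The numbers below are invented purely for illustration.

forecasts = [0.9, 0.8, 0.7, 0.3, 0.6, 0.2, 0.85, 0.4, 0.75, 0.1]  # stated P(event)
outcomes  = [1,   1,   0,   0,   1,   0,   1,    1,   1,    0]    # what actually happened

# Brier score: mean squared error between stated probability and outcome (lower is better).
brier = sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)
print(f"Brier score: {brier:.3f}")

# Calibration check: within each probability bucket, did the event happen
# about as often as the forecaster said it would?
buckets = {}
for p, o in zip(forecasts, outcomes):
    key = round(p, 1)  # crude bucketing, fine for a tiny example
    buckets.setdefault(key, []).append(o)

for key in sorted(buckets):
    hits = buckets[key]
    print(f"said ~{key:.0%}: happened {sum(hits)}/{len(hits)} times")

# A decision, by contrast, only resolves the branch actually taken; the margin
# over the best alternative never becomes a 0/1 outcome you can score this way,
# which is the point above about why judgement is harder to calibrate.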
Hi Khorton, that’s true. In the post I say: “Forecasting isn’t exactly the same as good judgement, but seems very closely related – it at least requires ‘weighing up complex information and coming to calibrated conclusions’, though it might require other abilities too. On the other side, I take good judgement to include ‘picking the right questions’, which forecasting doesn’t cover.”
So I think they’re pretty close.
The other point is that, yes—I think we have some reasonable evidence that calibration and forecasting can be improved (via the things mentioned in the post), but I’m less confident in other ways to improve judgement. I’ve made some edits to the post to make this clearer.
One other way of improving judgement in general that I do mention, though, is to spend time talking to other people who have good judgement.
I would be pretty surprised if most of the people from the EALF survey thought that forecasting is “very closely related” to good judgement.
I think I disagree, though that’s just my impression. As one piece of evidence, the article I most drew on is by Open Phil and also treats them as very related: https://www.openphilanthropy.org/blog/efforts-improve-accuracy-our-judgments-and-forecasts
Oh got it! Sorry I missed it, thanks