Good judgment is obviously broader than the narrow “forecasting” Tetlock is studying. But it seems to me that, other than high-level values questions (e.g. average vs. aggregate utilitarianism), it all comes down to prediction skill in some sense, as a necessary consequence of consequentialism. If you can think of something that’s part of good judgment and not either part of core values or of prediction in a broad sense, I’d like to hear what specifically it is, because I can’t think of anything.
“Ultimately actions are good or bad based solely on their consequences” necessarily implies your chosen actions will be better if you can predict outcomes better (all else being equal, of course, especially your degree of adherence to the plan).
All this description of skills that are supposedly separate from forecasting, e.g. “picking the right questions”, “deciding which kinds of forecasting errors are more acceptable than others”, etc., sounds like a failure to rigorously think through what it means to be good at forecasting. Picking the right questions is just Fermi-izing applied at a higher level than the one at which the Superforecasters apply it. “Picking the right kinds of errors” really seems to be about planning for robustness in the face of catastrophe; it argues against a straw-man expected-value calculation that I don’t think an actually good forecaster would be naive enough to make.
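To make that last point concrete, here’s a minimal sketch of what I mean (all numbers are hypothetical, invented purely for illustration): a naive expected-value calculation can be indifferent between two plans that any decent forecaster would distinguish on tail risk, and distinguishing them is itself a forecasting judgment.

```python
# Hypothetical toy example: two plans with the same naive expected value,
# but very different tail risk. All numbers are made up for illustration.

plans = {
    # (probability, payoff) pairs for each outcome of a plan
    "safe":  [(0.5, 60), (0.5, 40)],     # EV = 50, worst case = 40
    "risky": [(0.9, 100), (0.1, -400)],  # EV = 50, worst case = -400
}

def expected_value(outcomes):
    return sum(p * payoff for p, payoff in outcomes)

def worst_case(outcomes):
    return min(payoff for _, payoff in outcomes)

for name, outcomes in plans.items():
    print(name, "EV:", expected_value(outcomes),
          "worst case:", worst_case(outcomes))

# A naive EV calculation is indifferent between the two plans. A forecaster
# who has actually thought about which errors are survivable prefers "safe".
# Choosing which errors to risk is still a forecast about outcome
# distributions, not a separate skill.
```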
Judgment is less about computing a single probability for an event you aren’t influencing, and more about forecasting the consequences of your own actions (or the actions you recommend to others) against the counterfactual where you/they don’t take the action. And you will never be able to calibrate it as well as you can calibrate Tetlockian forecasting, because the thing you’re really interested in is the marginal difference between the choice you made and the best alternative you could have made, rather than a yes/no outcome. But it’s still forecasting.
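A rough sketch of the contrast (my own toy framing, not anything from Tetlock): a Tetlock-style forecast resolves against an observable yes/no outcome, so it can be scored and calibrated, while the quantity judgment cares about is a regret-like difference against an alternative that never resolves.

```python
# Toy contrast (my own framing): what resolves in Tetlock-style forecasting
# vs. what you'd need to observe to score a judgment call.

# Tetlock-style: forecast probability p for a yes/no event. After it
# resolves you can score it, e.g. with the Brier score, and calibrate
# over many questions.
def brier(p: float, outcome: bool) -> float:
    return (p - (1.0 if outcome else 0.0)) ** 2

print(brier(0.7, True))   # 0.09 -- observable, so calibration is possible

# Judgment: the quantity of interest is a regret-like difference,
#     regret = value(best alternative action) - value(chosen action),
# but you only ever observe value(chosen action). The counterfactual term
# never resolves, so you can't calibrate it the same way -- even though
# estimating it is still a forecast.
```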
I would be pretty surprised if most of the people from the EALF survey thought that forecasting is “very closely related” to good judgment.
I think I disagree, though that’s just my impression. As one piece of evidence, the article I most drew on is by Open Phil and also treats them as very related: https://www.openphilanthropy.org/blog/efforts-improve-accuracy-our-judgments-and-forecasts