Yes, but it is hard, and such forecasts don't work well. They can, however, be done at least slightly better.
Good Judgment was asked to forecast the risk of a nuclear war in the next year, which helps somewhat with the time-frame question. Unfortunately, the Brier-score incentives are still really weak.
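To make the weak-incentives point concrete, here is a toy calculation (the 1% annual probability is a hypothetical number chosen for illustration, not an estimate): for a rare event, a forecaster who lazily rounds to zero pays almost no expected Brier penalty relative to an honest forecaster.

```python
# Expected Brier loss for a binary event: compare an honest forecast of a
# rare event against simply predicting "it won't happen".
# (true_p = 0.01 is an illustrative annual probability, not a real estimate.)

def expected_brier(forecast: float, true_p: float) -> float:
    """Expected squared error of `forecast` when the event occurs
    with probability `true_p`."""
    return true_p * (forecast - 1) ** 2 + (1 - true_p) * forecast ** 2

true_p = 0.01
honest = expected_brier(true_p, true_p)  # forecast the true probability
lazy = expected_brier(0.0, true_p)       # just say "won't happen"

print(f"honest forecaster's expected loss: {honest:.6f}")  # 0.009900
print(f"lazy forecaster's expected loss:   {lazy:.6f}")    # 0.010000
print(f"incentive gap:                     {lazy - honest:.6f}")  # 0.000100
```

The gap of one part in ten thousand is the entire scoring incentive to get the probability right, and it only pays off when the question resolves, possibly decades from now.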
Ozzie Gooen and others have talked a lot about how to make forecasting better. Some of the ideas he has suggested relate to how to forecast longer-term questions. I can't find a link to a public document, but here's one example (which may have been someone else's suggestion):
You ask people to forecast what probability people will assign in 5 years to the question "Will there be a nuclear war by 2100?" (You might also ask whether there will be a nuclear war in the next 5 years, of course.) By using this trick, you can have the question(s) resolve in 5 years, and have an approximate answer based on iterated expectation. But extending this, you can also have them predict what probability people will assign in 5 years to the probability they will assign in another 5 years to the question "Will there be a nuclear war by 2100?", and by chaining predictions like this, you can transform very long-term questions into a series of shorter-term questions.
There is other work in this vein, but to simplify, all of it takes the form "can we do something clever to slightly reduce the issues that exist with the fundamentally hard problem of getting short-term answers to long-term questions?" As far as I can see, there aren't any simple answers.
Thanks for the answer. Will MacAskill mentioned in this comment that he'd 'expect that, say, a panel of superforecasters, after being exposed to all the arguments, would be closer to my view than to the median FHI view.'
You're a good forecaster, right? Does it seem right to you that a panel of good forecasters would come to something like Will's view, rather than the median FHI view?
I'll speak for the consensus when I say I think there's no clear way to decide whether this is correct without actually doing it—and the outcome would depend a lot on what level of engagement the superforecasters already had with these ideas. (If I got to pick the 5 superforecasters, even excluding myself, I could guarantee the result was either closer to FHI's viewpoints, or closer to Will's.) Even if we picked from a "fair" reference class, if I could have them spend 2 weeks at FHI talking to people there, I think a reasonable proportion would be convinced—though perhaps this is less a function of neutral updating toward correct ideas than of the emergence of consensus in groups.
Lastly, I have tremendous respect for Will, but I don’t know that he’s calibrated particularly well to make a prediction like this. (Not that I know he isn’t—I just don’t have any reason to think he’s spent much time working on this skillset.)