An extension of Daniel’s bonus question:
If I condition on your report being wrong in an important way (either in its numerical predictions, or via conceptual flaws) and think about how we might figure that out today, it seems like two salient possibilities are inside-view arguments and outside-view arguments.
The former are things like “this explicit assumption in your model is wrong”. E.g. I count my concern about the infeasibility of building AGI using algorithms available in 2020 as an inside-view argument.
The latter are arguments that, based on the general difficulty of forecasting the future, there’s probably some upcoming paradigm shift or crucial consideration which will have a big effect on your conclusions (even if nobody currently knows what it will be).
Are you more worried about the inside-view arguments of current ML researchers, or outside-view arguments?
I generally spend most of my energy looking for inside-view considerations that might be wrong, because they are more likely to suggest a particular directional update (although I’m not focused only on inside-view arguments from ML researchers specifically; I also place a lot of weight on inside-view arguments from generalists).
It’s often hard to incorporate the most outside-view considerations into bottom-line estimates, because it’s not clear what their implication should be. For example, the outside-view argument “it’s difficult to forecast the future, so you should be very uncertain” may imply spreading probability out more widely, but that would mean assigning higher probabilities to TAI very soon, which is in tension with another outside-view argument along the lines of “predicting that something extraordinary will happen very soon has a bad track record.”
Shouldn’t a combination of those two heuristics lead to spreading out the probability but with somewhat more probability mass on the longer-term rather than the shorter term?
That’s fair, and I do try to think about this sort of thing when choosing e.g. how wide to make my probability distributions and where to center them; I often make them wider than feels reasonable to me. I didn’t mean to imply that I explicitly avoid incorporating such outside-view considerations, just that the returns to further thinking about them are often lower by their nature (since they’re often about unknown unknowns).
True. My main concern here is the lamppost issue (looking under the lamppost because that’s where the light is). If the unknown unknowns affect the probability distribution, then personally I’d prefer to incorporate that or at least explicitly acknowledge it. Not a critique—I think you do acknowledge it—but just a comment.
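To make the arithmetic behind this exchange concrete, here is a minimal illustrative sketch in Python. It is not from the report: the lognormal form, the medians, and the spreads are all hypothetical numbers chosen only to show the qualitative effect. Widening a distribution over years-until-TAI pushes probability into both tails, including the near term (the tension noted above), while widening it and also shifting its center later keeps it spread out but puts relatively more of the extra mass on longer timelines (the combination suggested above).

```python
# Illustrative sketch only: the lognormal form and every number below are
# hypothetical, chosen to show the qualitative effect, not taken from the report.
from scipy.stats import lognorm

def tai_dist(median_years, sigma):
    # Lognormal over "years from now until TAI": scale = exp(mu) = median,
    # and s = sigma controls how spread out the distribution is in log-space.
    return lognorm(s=sigma, scale=median_years)

baseline            = tai_dist(median_years=30, sigma=0.5)  # relatively confident
widened             = tai_dist(median_years=30, sigma=0.9)  # heuristic 1: be more uncertain
widened_and_shifted = tai_dist(median_years=40, sigma=0.9)  # heuristics 1 + 2: wider AND later

for name, dist in [("baseline", baseline),
                   ("widened", widened),
                   ("widened_and_shifted", widened_and_shifted)]:
    p_within_10 = dist.cdf(10)      # P(TAI within 10 years)
    p_beyond_50 = 1 - dist.cdf(50)  # P(TAI more than 50 years out)
    print(f"{name:20s} P(<10y) = {p_within_10:.2f}   P(>50y) = {p_beyond_50:.2f}")
```

With these particular (made-up) numbers, widening alone noticeably raises the probability assigned to TAI within ten years relative to the baseline, whereas widening and shifting the median later keeps the distribution wide but moves relatively more of the extra mass toward longer timelines.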