[Disclaimer that I haven’t actually read your post yet—sorry! - though I may do so soon :)]
I’d rather not rely on the authority of past performance to gauge whether someone’s arguments are good. I think we should evaluate the arguments directly. If they are, they’ll stand on their own regardless of someone’s prior luck/circumstance/personality.
I agree that we should often/usually evaluate arguments directly. But:
We have nowhere near enough time to properly evaluate all arguments relevant to our decisions. And in some cases, we also lack the relevant capabilities. So in effect, it’s often necessary and/or wise to base certain beliefs mostly on what certain other people seem to believe.
For example, I don’t actually know that much about how climate science works, and my object-level understanding of the arguments for climate change being real, substantial, and anthropogenic is too shallow for me to be confident—on that basis alone—that those conclusions are correct. (I think a clever person could’ve made false claims about climate science sound similarly believable to me, if they’d been motivated to do so and I’d only looked into it to the extent that I have.)
The same is more strongly true for people with less education and intellectual curiosity than me.
But it’s good for us to default to being fairly confident that claims which most relevant scientists agree on are indeed true.
The same basic point is even more clearly true when it comes to things like the big bang, or the fact that dinosaurs existed and when they did.
See also epistemic humility.
We can both evaluate arguments directly and consider people’s track records
We could also evaluate the “meta argument” that “people who have been shown to be decent forecasters (or better forecasters than other people are) on relatively short time horizons will also be at least slightly ok forecasters (or at least slightly better forecasters than other people are) on relatively long time horizons”.
Evaluating that argument directly, I think we should land on “This seems more likely to be true than not, though there’s still room for uncertainty.”
See also How Feasible Is Long-range Forecasting?, and particularly footnote 17.
Another way of making a perhaps similar point: it very often makes sense to treat past outcomes from some person, object, or process as at least a weak indicator of future outcomes from that same thing.
E.g., the more often a car has failed to start properly in the past, the more often we should expect it to fail in the future.
E.g., the better a person has done at a job in the past, the better we should expect them to do at that job or similar jobs in the future.
It’s not clear why this would fail to be the case for forecasting.
And indeed, there is empirical evidence that it is the case for forecasting.
That said, we are comparing forecasts over short time horizons to forecasts over long time horizons, and that does introduce some more room for doubt, as noted above.
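As a toy illustration of the “past outcomes as a weak indicator of future outcomes” idea (this is my sketch, not something from the original discussion), Laplace’s rule of succession gives one minimal model of updating on a track record. The function name and all numbers are invented for illustration:

```python
# Toy model (illustrative only): Laplace's rule of succession as one
# simple way past outcomes update our expectation of future outcomes.
# All numbers below are made up.

def rule_of_succession(failures: int, trials: int) -> float:
    """Estimated probability that the next trial is a failure, given
    `failures` observed failures in `trials` trials, starting from a
    uniform prior over the underlying failure rate."""
    return (failures + 1) / (trials + 2)

# A car that failed to start 1 time in 10 vs. one that failed 5 times in 10:
print(rule_of_succession(1, 10))  # ≈ 0.17
print(rule_of_succession(5, 10))  # = 0.5
```

The same shape of update applies to a person’s job performance or a forecaster’s hit rate: more observed successes shift the estimate for the next instance, but never all the way to certainty.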
What Linch was talking about seems very unlikely to boil down to just “someone’s prior luck/circumstance/personality”.
Actual track records would definitely not be a result of personality except inasmuch as personality is actually relevant to better performance (e.g. via determination to work hard at forecasting).
They’re very likely partly due to luck, but the evidence shows that some forecasters tend to do better over a large enough set of questions that it can’t be just due to luck (I have in mind Tetlock’s work).
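As a hedged sketch of why a long enough track record can’t be just luck (the scenario and numbers below are invented for illustration, not Tetlock’s actual data): under a no-skill null hypothesis where a forecaster only “beats the crowd” by coin flip, the chance of a strong record shrinks rapidly as the number of resolved questions grows.

```python
# Illustrative sketch (invented numbers): if a forecaster had no real
# edge (50/50 on each question, independently), how likely is it that
# they beat the crowd on at least k of n resolved questions by luck?

from math import comb

def p_at_least(k: int, n: int, p: float = 0.5) -> float:
    """P(X >= k) for X ~ Binomial(n, p): the binomial upper tail."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))

print(p_at_least(60, 100))    # roughly 0.03: already unlikely by luck alone
print(p_at_least(600, 1000))  # vanishingly small at the same hit rate
```

The point is just that the same 60% hit rate is weak evidence over 100 questions but overwhelming evidence over 1,000, which is why a sizeable question set matters when attributing a record to skill rather than luck.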