However, many elements of the philosophical case for longtermism are independent of contingent facts about what will happen with AI in the coming decades.
It could be that both acceptance of longtermism and the ability to forecast AI accurately are caused by some shared underlying factor, e.g. the ability (or inability) to reason clearly and think systematically.
Or, put another way: in general, for any two questions that have objectively correct answers, giving correct answers should be fairly correlated, even if the questions themselves are completely unrelated. Forecasting definitely has an objectively correct answer; I'm not sure longtermism vs. neartermism does, but I think it's plausible that it will one day look settled, or at least come down mostly to epistemic uncertainty.
So I don’t see why views on these topics should be uncorrelated, unless you think the philosophical questions around longtermism vs. neartermism are purely matters of opinion or values differences with no epistemic uncertainty left, and that people’s answers to them are unlikely to change under reflection.
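As a rough illustration of the "shared underlying factor" point (this is a toy sketch with made-up numbers, not anything from the original argument): if correctness on two completely unrelated questions each depends partly on a common latent trait like reasoning ability, then correctness on the two questions ends up positively correlated even though the questions have nothing to do with each other.

```python
# Toy simulation (hypothetical parameters): a shared latent factor induces
# correlation between correctness on two otherwise unrelated questions.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

ability = rng.normal(size=n)   # shared latent factor (reasoning ability)
noise_a = rng.normal(size=n)   # question-specific luck/knowledge for question A
noise_b = rng.normal(size=n)   # question-specific luck/knowledge for question B

# Each answer is correct when ability plus question-specific noise clears a threshold.
correct_a = (0.7 * ability + noise_a > 0).astype(float)
correct_b = (0.7 * ability + noise_b > 0).astype(float)

# Positive correlation (~0.2 with these made-up weights), despite the
# questions sharing nothing except the latent factor.
print(np.corrcoef(correct_a, correct_b)[0, 1])
```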