I think we have a lot of analysis of the very long-term outcomes we’re aiming for (“ensure AI is aligned”; “embark on a long reflection”), a lot of discussion of the immediate plans we’re considering, and relatively little analysis of what’s good on the intermediate timescale. But I think understanding that intermediate timescale is really important for informing more immediate plans.
This reminded me of a passage from Bostrom’s The Future of Humanity:

Predictability does not necessarily fall off with temporal distance. It may be highly unpredictable where a traveler will be one hour after the start of her journey, yet predictable that after five hours she will be at her destination. The very long-term future of humanity may be relatively easy to predict, being a matter amenable to study by the natural sciences, particularly cosmology (physical eschatology).
It’s possible that immediate and very long-term outcomes are in some ways easier to predict, and that that’s part of the reason we have more analysis of them than of intermediate scenarios.
However, I still agree that we should do more analysis of those intermediate scenarios.