It’s that, absent some device for fixing values in place in society, the (expected) impact of most societal changes decays over time.
AGI is plausibly such a device. MIRI and Bostrom seem to place reasonable probability on a goal-preserving superintelligence (since goal preservation is a basic AI drive). AGI could preserve values more thoroughly than any human institution possibly could, since worker robots can be programmed to avoid goal drift, and in a singleton scenario without competition, evolutionary pressures won’t select for new values.
So it seems the values that people have in the next centuries could matter a lot for the quality of the future from now until the stars die out, at least in scenarios where human values are loaded to a nontrivial degree into the dominant AGI(s).