First, predicting the values of our successors – what John Danaher (2021) calls axiological futurism – in worlds where these are meaningfully different from ours doesn’t seem intractable at all. Significant progress has already been made in this research area and there seems to be room for much more (see the next section and the Appendix).
Could you point more specifically to what progress you think has been made? Since this research area seems to have existed only since 2021, we can't yet have made successful predictions about future values, so I'm curious what has been achieved.

Yeah, so Danaher (2021) coined the term axiological futurism, but research on this topic existed long before that. For instance, I find these two pieces particularly insightful:
Robin Hanson (1998) Burning the Cosmic Commons: Evolutionary Strategies for Interstellar Colonization
Nick Bostrom (2004) The Future of Human Evolution
They explore how compassionate values might be selected against under evolutionary pressures and replaced by values more competitive in, e.g., space colonization races. In The Age of Em, Robin Hanson forecasts what would happen if whole brain emulation arrives before de novo AGI, and reaches similar conclusions.
I don’t think we can say they made “successful predictions” that settled the debate, but they did surface quite important considerations.
I intend to elaborate more on this kind of work in future posts within this sequence. :)