Predicting what future people value: A terse introduction to Axiological Futurism

Why this is worth researching

Humanity might develop artificial general intelligence (AGI)[1], colonize space, and create astronomical amounts of things in the future (Bostrom 2003; MacAskill 2022; Althaus and Gloor 2016). But what things? How (dis)valuable? And how does this compare with things grabby aliens would eventually create if they colonize our corner of the universe? What does this imply for our work aimed at impacting the long-term future?

While this depends on many factors, a crucial one will likely be the values of our successors.

Here’s a position that might tempt us when considering whether this topic is worth researching:

Our descendants are unlikely to have values that are both different from ours in a very significant way and predictable. Either they have values similar to ours or they have values we can’t predict. Therefore, trying to predict their values is a waste of time and resources.

While I see why this can seem compelling, I think it is ill-informed.

First, predicting the values of our successors – what John Danaher (2021) calls axiological futurism – in worlds where these are meaningfully different from ours doesn’t seem intractable at all. Significant progress has already been made in this research area and there seems to be room for much more (see the next section and the Appendix).

Second, a scenario where the values of our descendants don’t significantly differ from ours appears quite unlikely to me.[2] We should watch out for things like the End of History illusion here. Values seem to have evolved notably throughout History, and there is no reason to assume our era is so special that we should abandon that prior.

Besides being tractable, I believe axiological futurism to be uncommonly important, given how instrumental it is to answering the crucial questions mentioned earlier. It therefore also seems unjustifiably neglected today.

How to research this

Here are examples of broad questions that could be part of a research agenda on this topic:

  • What are the best predictors of future human values? What can we learn from usual forecasting methods?

  • How have people’s values changed throughout History? Why? What can we learn from this? (see, e.g., MacAskill 2022, Chapter 3; Harris 2019; Hopster 2022)

  • Are there reasons to think we’ll observe less change in the future? Why? Value lock-in? Some form of moral convergence happening soon?

  • Are there reasons to expect more change? Would that be due to the development of AGI, whole brain emulation, space colonization, and/or accelerated value drift? What if a global catastrophe occurs?

  • More broadly, what impact will future technological progress have on values? (See Hanson 2016 for an example forecast.)

  • Should we expect some values to be selected for? (see, e.g., Christiano 2013; Bostrom 2009; Tomasik 2017)

  • Might a period of “long reflection” take place? If yes, can we get some idea of what could result from it?

  • Does something like coherent extrapolated volition have any chance of being pursued, and if so, what could realistically result from it?

  • Are there futures – where humanity has certain values – that are unlikely but worth wagering on?

  • Might our research on this topic affect the values we should expect our successors to have by, e.g., triggering a self-defeating or self-fulfilling prophecy effect? (Danaher 2021, section 2)

  • What do/will aliens value, and what does that tell us about ourselves?

  • What about the values of a potential post-human-extinction civilization on Earth?

John Danaher (2021) gives examples of methodologies that could be used to answer these questions.

Also, my Appendix references examples and other relevant work, including subsequent posts in this sequence.

Acknowledgments

Thanks to Anders Sandberg for pointing me to the work of John Danaher (2021) and for our insightful discussion on this topic. Thanks to Elias Schmied for other recommendations. Thanks also to M. Victoria Calabrese for her stylistic suggestions. My work on this sequence so far has been funded by Existential Risk Alliance.

All assumptions/claims/omissions are my own.

Appendix: Relevant work

(This list is not exhaustive.[3] It is more or less ranked in decreasing order of relevance.)

  1. Or something roughly as transformative.

  2. A sudden value lock-in with an AGI developed and deployed in the next years/decades is probably the most credible possibility. (See Finnveden et al. 2022.)

  3. This is more because of my limited knowledge than due to an intent to keep this list short, so please send me other potentially relevant resources!