What do we want the world to look like in 10 years?

We have a lot of uncertainty over the world we’ll see in a decade. There are many different dimensions it might vary on. Some of these are likely much more important than others for ensuring a good future. Which ones?

I’m keen to see more discussion of this in longtermist EA circles. I think we have a lot of analysis of the very long-term outcomes we’re aiming for (“ensure AI is aligned”; “embark on a long reflection”), a lot of discussion of the immediate plans we’re considering, and relatively little discussion of what’s good on this intermediate timescale. But I think this intermediate picture is really important for informing more immediate plans. It isn’t enough just to identify a handful of top-priority variables (like “quality-adjusted amount of AI alignment research”): comparative advantage varies between people, and sometimes there are high-leverage opportunities available for achieving various ends, so it’s helpful to understand how good those ends would be to achieve.

I’ve been getting mileage out of this prompt as an exercise for groups over the last year.[1] I think people don’t need a full strategic understanding to engage fruitfully with the question, but that answers aren’t simple even for people who’ve thought about it a lot. Discussion of detailed cases often seems productive (at least I find it productive to think about individual cases, and I’ve liked the conversations I’ve observed others having).

Examples of desirable states[2]

  • Differential technological development is a major principle used in the allocation of public research spending

  • We have ways of monitoring and responding to early-stage potential pandemic pathogens that make civilization-ending pandemics almost impossible

  • There is a variety of deeply inspiring art, and a sense of hope for the future among society broadly

  • There is a textbook on the AI alignment problem which crisply sets out the problem in unambiguous technical terms, and is an easy on-ramp for strong technical researchers, while capturing the heart of what’s important and difficult about it

  • Society has found healthier models of relating to social media, in which it is less addictive and doesn’t amplify crazy-but-clickbaity views

  • There are more robust fact-checking institutions, and cultural and intellectual elites have better language in common use for discussing the provenance of beliefs

  • We have better models for avoiding rent-seeking behaviour

  • Tensions between great powers are low

These examples would be even better if they were more concrete/precise (such that it would be unambiguous, on waking up in 10 years, whether they had been achieved), but often the slightly fuzzy forms will be more achievable as a starting point.

This is a short list of examples; in practice, when I’ve run this exercise for long enough, people have come up with hundreds of ideas. (Of course some of the ideas are much better than others.)

Consider spending time on this question

I wanted to share the prompt as an invitation to others to spend time on it, either by themselves or as a group exercise. I’ve liked coming back to it multiple times, and I expect I’ll continue doing that.

This question is close to cause prioritization. I think of it as a complement rather than a replacement. Reasons to include it in the mix of things to think about:

  • The cause prioritization frame nudges towards identifying a single best thing and stopping, but I think it’s often helpful in practice to have thoughts on a suite of different things

    • e.g. for noticing particularly high-leverage opportunities

  • Immediate plans for making progress on large causes must factor through effects in the intermediate future; it can be helpful to look directly at what we’re aiming for on those timescales

  • It naturally encourages going concrete

    • One can do this in cause prioritization

      • e.g. replace the cause of “align AI” with the sub-cause of “ensure AI safety is taken seriously as an issue among researchers at major AI labs”

    • In practice, I think causes are often left relatively broad and abstract, without much argument about the relative priority of sub-causes

  • Concreteness of targets can help to generate ideas for how to achieve those targets

Meta: I suggest using comments on this post for the meta-level question of whether this is a helpful question, how to run exercises on it, etc. Object-level discussion of which particular things make good goals on this timescale could happen in separate posts — e.g. posts that consider particular 10-year goals, then analyse how good they would be to achieve and what it would take to achieve them.

[1] Originally in a short series of workshops I ran with Becca Kagan and Damon Binder. We had some more complex writeups that we may get around to publishing some day, but after noticing that I’d become stuck on how to finish polishing those, I thought I should share this simple central component.

[2] These examples are excerpted from a longer list of answers to a brainstorm activity at a recent workshop I helped run.