Urgency vs. Patience—a Toy Model

I’m pretty confused here, so any comments and feedback are much appreciated, including criticism.

Toy Model

Let $V$ be the value of the longterm future. Let $P$ be the probability that our descendants safely reach technological maturity. Let $Q$ be the expected quality of the longterm future, given that we safely reach technological maturity. Then the value of the longterm future is:

$V = P \cdot Q$

This ignores all the value in the longterm future that occurs when our descendants don’t safely reach technological maturity.

Assume that we can choose between doing some urgent longtermist work, say existential risk reduction (XRR), or some patient longtermist work, let’s call this global priorities research (GPR). Assume that the existential risk reduction work increases the probability that our descendants safely reach technological maturity, but has no other effect on the quality of the future. Assume that the global priorities research increases the quality of the longterm future conditional on it occurring, but has no effect on existential risk.

Consider some small change in either existential risk reduction work or global priorities research. You can imagine this as $10 trillion, or ‘what the EA community focuses on for the next 50 years’, or something like that. Then for some small finite increase in the probability of survival due to risk reduction, $\Delta P_{XRR}$, or in the quality of the future due to global priorities research, $\Delta Q_{GPR}$, the change in the value of the longterm future will be:

$\Delta V_{XRR} = \Delta P_{XRR} \cdot Q$

$\Delta V_{GPR} = P \cdot \Delta Q_{GPR}$

Dropping the subscripts on $\Delta P_{XRR}$ and $\Delta Q_{GPR}$, and dividing the first equation by the second:

$\frac{\Delta V_{XRR}}{\Delta V_{GPR}} = \frac{\Delta P \cdot Q}{P \cdot \Delta Q} = \frac{\Delta P / P}{\Delta Q / Q}$

Rewriting in more intuitive terms:

$\frac{\text{value of XRR}}{\text{value of GPR}} = \frac{\text{fractional increase in probability of survival}}{\text{fractional increase in quality of the future}}$
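
To make the algebra concrete, here is a minimal sketch of the model in Python (the function names and the numbers in the check are mine, purely for illustration):

```python
# Toy model: V = P * Q, where P is the probability of safely reaching
# technological maturity and Q is the expected quality of the future given that.

def relative_value_of_xrr(p, q, delta_p, delta_q):
    """Ratio of the value of a unit of XRR to a unit of GPR.

    XRR shifts P by delta_p and leaves Q alone; GPR shifts Q by delta_q
    and leaves P alone.
    """
    value_of_xrr = delta_p * q
    value_of_gpr = p * delta_q
    return value_of_xrr / value_of_gpr


def relative_value_fractional(p, q, delta_p, delta_q):
    """Same ratio, written as fractional increase in P over fractional increase in Q."""
    return (delta_p / p) / (delta_q / q)


# The two forms agree (arbitrary illustrative numbers):
assert abs(relative_value_of_xrr(0.8, 1.0, 0.1, 0.2)
           - relative_value_fractional(0.8, 1.0, 0.1, 0.2)) < 1e-9
```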

Critiquing the Model

I’ve made the assumption that x-risk reduction work doesn’t otherwise affect the quality of the future, and patient longtermist work doesn’t affect the probability of existential risk. Obviously, this isn’t true. I’m not sure how much this reduces the value of the model. If one type of work was much more valuable than the other, I could see this assumption being problematic. Eg. if GPR was 10x as cost-effective as XRR, then the value of XRR-focussed work might mainly be in the quality improvements, not the probability improvements.

I’ve made the assumption that we can ignore all value other than worlds where we safely reach technological maturity. This seems pretty intuitive to me, given the likely quality, size, and duration of a technologically mature society, and my ethical views.

Putting some numbers in

Let’s put some numbers in. Toby Ord thinks that with a big effort, humanity can reduce the probability of existential risk this century from 1/6 to 1/16. That would make the fractional increase in probability of survival 12.5% (it goes from 5/6 ≈ 83.3% to 15/16 = 93.75%). Assume for simplicity that x-risk after this century is zero.

For GPR to be as cost-effective as XRR given these numbers (so the above equation equals 1), the fractional increase in the value of the future for a comparable amount of work would have to be 12.5%.
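
As a quick check of that arithmetic (a sketch using the 1/6 and 1/16 figures above):

```python
# Fractional increase in the probability of survival if x-risk this century
# falls from 1/6 to 1/16 (and is zero afterwards).
p_before = 1 - 1 / 6    # 5/6   ≈ 0.833
p_after = 1 - 1 / 16    # 15/16 = 0.9375

fractional_increase = p_after / p_before - 1
print(f"{fractional_increase:.1%}")  # 12.5%
```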

Toby’s numbers are really quite favourable to XRR, though, so putting in your own seems good.

Eg. If you think X-risk is , and we could reduce it to with some amount of effort, then the fractional increase in probability of survival is about (it goes from to ). So for GPR to be cost-competitive, we’d have to be able to increase the value of the future by with a similar amount of work to what the XRR would have taken.
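
If you want to plug in your own numbers, something like the helper below works. The 50% and 25% risk figures in the example are arbitrary placeholders of mine, not estimates:

```python
def required_fractional_value_increase(risk_before, risk_after):
    """Fractional increase in survival probability from a given risk reduction.

    Under the toy model, this is also how much GPR would need to increase the
    quality of the future (for comparable effort) to be cost-competitive with
    that risk reduction.
    """
    return (1 - risk_after) / (1 - risk_before) - 1


# Arbitrary placeholder numbers, purely for illustration:
print(f"{required_fractional_value_increase(0.50, 0.25):.1%}")     # 50.0%
# Toby Ord's figures from above:
print(f"{required_fractional_value_increase(1 / 6, 1 / 16):.1%}")  # 12.5%
```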

Implications

Would it take a similar amount of effort to reduce the probability of existential risk this century from 1/6 to 1/16 as to increase the value of the future, conditional on it occurring, by 12.5%? My intuition is that the latter is actually much harder than the former. Remember, you’ve got to make the whole future better for all time. What do you think?

Some things going into this are:

  • I think it’s pretty likely () that there will be highly transformative events over the next two centuries. It seems really hard to make detailed plans with steps that happen after these highly transformative events.

  • I’m not sure that research about how the world works now actually helps much with understanding how the world works after these highly transformative events. If we’re all digital minds, or in space, or highly genetically modified, then understanding how today’s poverty, ecosystems, or governments worked might not be very helpful.

  • The minds doing research after the transition might be much more powerful than current researchers. A lower bound seems like 200+ IQ humans (and lots more of them than are researchers now), a reasonable expectation seems like a group of superhuman narrow AIs, an upper bound seems like a superintelligent general AI. I think these could do much better research, much faster than current humans working in our current institutions. Of course, building the field means these future researchers have more to work with when they get started. But I guess this is negligible compared to increasing the probability that these future researchers exist, given how much faster they would be.

Having said that, I don’t have a great understanding of the routes to value for longtermist research that doesn’t contribute to reducing or understanding existential risk (and I think it is probably valuable, for epistemic modesty reasons).

I should also say that lots of actual ‘global priorities research’ does a lot to understand and reduce x-risk, and could be understood as XRR work. I wonder how useful a concept ‘global priorities research’ is, and whether it’s too broad.

Questions

  • What’s the best way to conceptualise the value of non-XRR longtermist work? Is it ‘make the future go better for the rest of time’? Does it rely on a lock-in event, like transformative technologies, to make the benefits permanent?

  • What numbers do you think are appropriate to put into this model? If a given unit of XRR work increases the probability of survival by , how much value could it have created via trajectory change? Any vague/half-baked considerations here are appreciated.

  • Do you think this model is accurate enough to be useful?

  • Do you think that the spillover effects of XRR on increasing the quality of the future, and of GPR on increasing the probability of survival, can be neglected?