Good v. Optimal Futures

Summary

In this post I try to outline some intuitions about the variance of value in “good” futures. Discussions of longtermism consider various futures that could be large in value and that justify focusing on long-term effects and on reducing existential risk. It seemed intuitive to me that the difference in the value of some of these futures (many orders of magnitude) should in some way affect our actions, perhaps by focusing us on making a very good future likely rather than on making a merely good future more likely. Below I try to flesh out this argument and my intuitions. I lay out two chains of reasoning and try to justify each step. I then consider various comments and possible objections to this reasoning. I conclude that this reasoning may be flawed, and may not change what actions we should take, but could change the reason we should pursue those actions.

Acknowledgements

Thanks to Simon Marshall and Aimee Watts for proof-reading, and to Charlotte Siegmann and Toby Tremlett for reviewing an earlier draft. All mistakes are my own.

Two arguments

(A) Good v. Optimal Futures

The reasoning I’m considering is of the following structure:

  1. There is a lot of variety in the value of futures we consider “good”. Some futures, which I’ll call “optimal”, are several orders of magnitude better than other seemingly good futures.

  2. It is plausible that the action with the highest expected value is the one that most increases the chance of an optimal future, not the one that most increases the chance of a good future.

  3. It is possible that the action that best increases the chance of an optimal future is different to the one that best increases the chance of a “good future”.

  4. Therefore we should choose the action that best increases the chance of an optimal future, even if it is not the action that best increases the chance of a “good future”.

Below I try to justify each of the above claims.

A1 - In The Case for Strong Longtermism, Greaves and MacAskill consider how humanity could continue to exist on Earth for a billion years at a population of 10 billion people per century, giving a possible future of $10^{17}$, or 100 quadrillion, people. In Astronomical Waste, Nick Bostrom considers how the Virgo Supercluster could support around $10^{23}$ biological humans a century. If this were to last for a billion years, there could be $10^{30}$ humans in the future. Bostrom also considers harnessing the power of all the stars in the supercluster to simulate human minds, possibly simulating $10^{38}$ human lives a century. If this were to last for a billion years, there could be $10^{45}$ human minds in the future. These futures are all huge, but the differences between them are enormous. Under expected utility theory, even a 1 in a trillion chance of a future full of simulated minds is better than a certainty of a future full of biological humans across the galaxy.
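
To make that last comparison concrete, using the rough figures above: a one in a trillion ($10^{-12}$) chance of the simulated-minds future has an expected value of

$$10^{-12} \times 10^{45} = 10^{33},$$

which is still a thousand times larger than the $10^{30}$ value of a guaranteed future of biological humans.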

A2 - We can consider a very basic model of the future. Suppose there is some limited period of existential risk, after which humanity reaches existential security and pursues a “good future”. Let the probability of humanity reaching existential security be $p$, and suppose there are three possible future trajectories after existential security, with values $V_1$, $V_2$, $V_3$ and probabilities $p_1$, $p_2$, $p_3$ of happening respectively, conditioned on humanity reaching existential security. Suppose $V_1$ is the value of humanity continuing to live on Earth for the next 1 billion years with 10 billion people. Suppose $V_2$ is the value of the galaxy filling with biological humans and existing as such for 1 billion years. Let $V_3$ be the value of the galaxy filling with simulated minds and existing as such for 1 billion years.
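
Writing $\mathbb{E}[V]$ for the expected value of the future, and ignoring any value realised before existential security (an assumption made explicit in the next paragraph), the model gives

$$\mathbb{E}[V] \approx p\,(p_1 V_1 + p_2 V_2 + p_3 V_3).$$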

Then using our numbers from above, $V_1 = 10^{17}$, $V_2 = 10^{30}$ and $V_3 = 10^{45}$. Suppose the time before reaching existential security is relatively short (e.g. a few centuries), so that the expected value of the future comes mainly from the time after reaching existential security. Then the expected value of the future is $\mathbb{E}[V] \approx p\,(p_1 V_1 + p_2 V_2 + p_3 V_3)$, and the marginal values of increasing $p$ and $p_3$ are $\partial\mathbb{E}[V]/\partial p = p_1 V_1 + p_2 V_2 + p_3 V_3$ and $\partial\mathbb{E}[V]/\partial p_3 \approx p V_3$ (shifting probability to the simulated-minds trajectory from the far less valuable ones). Putting in our numerical values gives $\partial\mathbb{E}[V]/\partial p = 10^{17} p_1 + 10^{30} p_2 + 10^{45} p_3$ and $\partial\mathbb{E}[V]/\partial p_3 \approx 10^{45} p$. Then given the size of $V_3$ (and assuming $p_3$ is not extremely small), we essentially have $\partial\mathbb{E}[V]/\partial p \approx 10^{45} p_3$ and $\partial\mathbb{E}[V]/\partial p_3 \approx 10^{45} p$. So if you can increase $p$ or $p_3$ by some small amount, you should choose to increase the smaller of the two. The question then is which is smaller, and how easy it is to affect $p$ or $p_3$. Suppose we can expend some fixed amount of effort either to increase $p$ by $\Delta p$ or to increase $p_3$ by $\Delta p_3$: the first gains roughly $10^{45} p_3\,\Delta p$ and the second roughly $10^{45} p\,\Delta p_3$. So what matters is whether $p_3\,\Delta p$ or $p\,\Delta p_3$ is larger. Suppose humanity has a decent chance of reaching existential security, say $p = 0.5$, and that it is unlikely to go down the simulated minds route, say $p_3 = 0.01$. Then increasing $p$ has to be 50 times easier than increasing $p_3$ (that is, $\Delta p > 50\,\Delta p_3$ for the same effort) for it to be worth focusing on $p$.
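
As a sanity check, here is a minimal Python sketch of this comparison. The probabilities and values are the illustrative toy numbers from above (with $p_1$ and $p_2$ chosen arbitrarily so the trajectory probabilities sum to one), not estimates of anything.

```python
# Toy model from above: E[V] = p * (p1*V1 + p2*V2 + p3*V3)
# Compare the gain from a small increase in p (reaching existential security)
# with the gain from the same small increase in p3 (converging on the optimal future).

p = 0.5                          # chance of reaching existential security (illustrative)
p1, p2, p3 = 0.59, 0.40, 0.01    # trajectory probabilities after security (illustrative)
V1, V2, V3 = 1e17, 1e30, 1e45    # values of the three trajectories

delta = 1e-6                     # a small change in probability

gain_from_p = delta * (p1 * V1 + p2 * V2 + p3 * V3)   # roughly delta * p3 * V3
gain_from_p3 = delta * p * V3

print(f"gain from increasing p:  {gain_from_p:.2e}")
print(f"gain from increasing p3: {gain_from_p3:.2e}")
print(f"ratio: {gain_from_p3 / gain_from_p:.1f}")      # roughly 50
```

With these numbers, the same effort spent on $p_3$ is worth roughly 50 times as much as effort spent on $p$, matching the calculation above.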

In general: if you think humanity is likely to converge on the optimal future once it reaches existential security, you should try to reduce existential risk; if you think existential risk is low but convergence on the optimal future is unlikely, you should try to make that convergence more likely.
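
In the notation above: if the same effort could raise $p$ by $\Delta p$ or $p_3$ by $\Delta p_3$, then reducing existential risk is the better option roughly when

$$p_3\,\Delta p > p\,\Delta p_3,$$

so confidence that humanity will converge on the optimal future (a large $p_3$) favours reducing existential risk, while a small $p_3$ favours working on the convergence itself.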

A3 - Methods for increasing $p$ include reducing AI risk, reducing biorisk and improving international cooperation. While these approaches may improve the overall chance of an optimal future, they do so by increasing $p$, not $p_3$. $p_3$ might be increased by promoting the valuation of simulated minds or by spreading certain values such as altruism or impartiality. It is possible that spreading such values could reduce existential risk as well. $p_3$ could also be increased by increasing the chance of a “singleton” forming, such as a dominant AGI or world government, which would then directly pursue the optimal future. Moreover, one could try to create institutions that will persist into the future and be able to alter humanity’s path once it reaches existential security. Overall though, it does seem that increasing $p_3$ is much harder than increasing $p$.

A4 - This point then follows from the previous ones. Note that even if it turns out to be better to increase $p$, the motivation for doing so is that it is the best way to make the optimal future most likely.

(B) Sensitivity to values

I believe a second point is how sensitive the choice of optimal future is to our values.

  1. Which good future you consider optimal is very sensitive to your values. People with similar but slightly different values will consider different futures optimal.

  2. Given A4, people with very similar but slightly different values will wish to pursue very different actions.

Below I try to justify each point.

B1 - The clearest difference is whether you value simulated minds or not. If you do, then a future of simulated minds is vastly better than anything else; if you don’t, then a future of simulated minds is worthless and a great loss of potential. I believe you could also disagree on issues like the definition of pleasure or of a valuable life. If the maximum amount of pleasure that can be realised is very large, then two slightly different definitions of pleasure might be maximised by considerably different universes.

B2 - According to (A), the most impactful thing we can do may be to increase the likelihood of the optimal future, possibly by making it more likely once humanity reaches existential security. Which action you should take therefore varies strongly with your values. Moreover, increasing the chance of your optimal future decreases the chance of someone else’s optimal future, so you could end up actively working against each other.

Objections and Comments

Value Convergence

The most obvious response seems to be that humanity will naturally converge towards the correct value system in the future, and so we don’t have to worry about shepherding humanity too much once it reaches existential security. In the calculation above, $p_3$ would then be very high and working on reducing existential risk would probably be more impactful. This seems possible to me, though I’m quite uncertain how to think about moral progress, and whether it is too naively optimistic to think humanity will naturally converge to the best possible future.

Moral Uncertainty

We should probably be quite uncertain about our values, and reluctant to commit to prioritising one specific future that is optimal for our best guess at our values but is likely not the truly optimal future. Trying to reduce existential risk then seems pretty robustly good, as it increases the chance of an optimal future even if you don’t know which future that is. Moreover, a community of people with similar but slightly different values working to reduce existential risk might almost be a case of moral trade: instead of actively working against each other on trajectories after existential security, they work on mutually beneficial existential risk reduction.

S-risks

A consideration I did not include earlier is s-risks, the risk of a very bad future containing large amounts of suffering. If you thought this was possible and that these bad futures could be very large in scale, then reducing x-risk seems like a worse approach, and you might want to focus on making the optimal future more likely after reaching existential security. If you thought the suffering of an s-risk future could dwarf any possible optimal future, then your considerations could be dominated by reducing the chance of this s-risk, which might mean reducing its likelihood once humanity reaches existential security.

Non-extinction x-risks

I’ve struggled with how to fit non-extinction x-risks into the above model. It seems that a non-optimal future might just be considered a dystopia or a flawed realisation. Then this model is really just comparing different x-risk interventions, and I may not be applying the term existential security correctly. I am hopeful, though, that the options I’m comparing are still quite different and worth comparing, even if both fall under a broader definition of “x-risk”.

Conclusion

Overall I’m quite uncertain about how these thoughts and considerations fit together, and it seems quite possible they don’t change our opinions about what we should be doing. But it seems possible that they could, and we should be aware of that possibility. If we are to work on reducing existential risk, it seems important to realise that we’re doing it because it’s the best way to increase the chance of an optimal future, not simply because we want to increase the chance of a good future; reducing existential risk could stop being the best way to increase the chance of an optimal future.

I’d really appreciate any comments and feedback.