Knowing the shape of future (longterm) value appears to be important for deciding which interventions would more effectively increase it. For example, if future value is roughly binary, the increase in its value is directly proportional to the decrease in the likelihood/severity of the worst outcomes, in which case existential risk reduction seems particularly useful[1]. On the other hand, if value is roughly uniform, focusing on multiple types of trajectory changes would arguably make more sense[2].
So I wonder: what is the shape of future value? To illustrate the question, I have plotted in the figure below the probability density function (PDF) of various beta distributions representing the future value as a fraction of its maximum value[3].
For simplicity, I have assumed future value cannot be negative. The mean is 0.5 for all distributions, which is Toby Ord's guess for the total existential risk given in The Precipice[4], and implies the distribution parameters alpha and beta have the same value[5]. As this common value tends to 0, the distribution of future value becomes more binary.
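As a rough sketch (not the actual Colab code), distributions like those in the figure can be plotted with a few lines of Python; the specific alpha values below are illustrative.

```python
# Minimal sketch of the figure described above: PDFs of beta distributions
# with mean 0.5 (alpha = beta). Smaller parameter values make the distribution
# of future value (as a fraction of its maximum) more binary.
# The alpha values below are illustrative, not necessarily the ones in the Colab.
import numpy as np
from scipy.stats import beta
import matplotlib.pyplot as plt

x = np.linspace(0.001, 0.999, 1000)
for a in [0.1, 0.5, 1, 2, 5]:  # alpha = beta, so the mean is a / (a + a) = 0.5
    plt.plot(x, beta.pdf(x, a, a), label=f"alpha = beta = {a}")

plt.xlabel("Future value as a fraction of its maximum")
plt.ylabel("Probability density")
plt.legend()
plt.show()
```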
- ^
Existential risk was originally defined in Bostrom 2002 as:
One where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential.
- ^
Although trajectory changes encompass existential risk reduction.
- ^
The calculations are in this Colab.
- ^
If forced to guess, I'd say there is something like a one in two chance that humanity avoids every existential catastrophe and eventually fulfills its potential: achieving something close to the best future open to us.
- ^
According to Wikipedia, the expected value of a beta distribution is alpha/(alpha + beta), which equals 0.5 for alpha = beta.
I think longterm value is quite binary in expectation.
I think a useful starting point is to ask: how many orders of magnitude does value span?
If we use the life of one happy individual today as one unit of goodness, then I think the maximum value originating from Earth in the next billion years (which is probably several orders of magnitude too low an estimate) is at least around 10^50 units of goodness per century.
My forecast on this Metaculus question reflects this: Highest GWP in the next Billion Years.
My current forecast:
How I got 10^50 units of goodness (i.e. happy current people) per century from my forecast
I converted 10^42 trillion USD to 10^50 happy people by saying today's economy is composed of about 10^10 happy people, and such a future economy would be about 10^40 times larger than today's economy. If happy people today live 100 years, that gives 10^50 units of goodness per century in the optimal future.
I also assumed the relationship between GWP and the value of people stays constant, which is of course highly dubious, but that's also how I came up with my (dubious) forecast for future GWP.
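For concreteness, here is that arithmetic as a short, illustrative Python snippet (the ~10^2 trillion USD figure for today's GWP is my assumption, implied by the 10^40 growth factor quoted above):

```python
# Rough check of the conversion from a GWP forecast to "units of goodness".
max_gwp = 1e42 * 1e12       # 10^42 trillion USD, in USD
todays_gwp = 1e2 * 1e12     # ~10^2 trillion USD today, in USD (assumption)
todays_happy_people = 1e10  # ~10^10 happy people in today's economy

growth_factor = max_gwp / todays_gwp                            # ~10^40
happy_people_equivalent = growth_factor * todays_happy_people   # ~10^50

# One unit of goodness = one happy 100-year life, so ~10^50 units per century.
print(f"{happy_people_equivalent:.0e} units of goodness per century")
```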
Is Value roughly Binary?
Yes, I think so. My forecast of maximum GWP in the next billion years (and thus my forecast of maximum value from Earth-originating life in the next billion years) appears to be roughly binary. I have a lot of weight around what I think the maximum value is (~10^42 trillion 2020 USD), and then a lot of weight on <10^15 trillion 2020 USD (i.e. near zero), but much less weight on the orders of magnitude in between. If you plot this on a linear scale, I think the value looks even more binary. If my x-axis were not actual value, but the fraction of whatever the true maximum possible value is, it would look even more binary (since the vast majority of my uncertainty near the top end would go away).
Note that this answer doesn't explain why my maximum GWP forecast has the shape it does, so it doesn't actually make much of a case for the answer; it just reports what I believe. Rather than explain why my GWP forecast has the shape it does, I'd invite anyone who doesn't think value is binary to show their corresponding forecast for maximum GWP (one that implies value is not binary) and explain why it looks like that. Given that potential value spans so many orders of magnitude, I think it's quite hard to make a plausible-seeming forecast in which value is not approximately binary.
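As an illustration only (the weights and spreads below are made up, not my actual Metaculus forecast), a "roughly binary" forecast of this kind could be sketched as a two-component mixture on a log scale:

```python
# Hypothetical sketch of a "roughly binary" forecast for maximum GWP:
# most probability mass sits either near the presumed maximum (~10^42 trillion
# USD) or well below 10^15 trillion USD (i.e. near zero), with little in between.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
n = 100_000
near_max = rng.normal(loc=42, scale=1, size=n)   # log10(trillion USD), near 10^42
near_zero = rng.normal(loc=5, scale=4, size=n)   # log10(trillion USD), far below 10^15

# Illustrative mixture: ~45% of mass near the maximum, ~55% near zero.
mix = np.where(rng.random(n) < 0.45, near_max, near_zero)

plt.hist(mix, bins=200, density=True)
plt.xlabel("log10(maximum GWP in trillion 2020 USD)")
plt.ylabel("Probability density")
plt.show()
```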
Slightly pedantic note, but shouldn't the Metaculus GWP question be phrased as the GWP in our lightcone? We can't reach most of the universe, so unless I'm misunderstanding, this would become a question about aliens and such, which is completely unrelated to, and out of the control of, humans.
I'm also somewhat confused about what money even means when you have complete control of all the matter in the universe. Is the idea to translate our levels of production into what they would be valued at today? Do people today value billions of sentient digital minds? Not saying this isn't useful to think about; I'm just trying to wrap my head around it.
Good questions. Re the first, from memory I believe I required that the economic activity be causally the result of Earth's economy today, which rules out alien economies from consideration.
Re the second, I think it's complicated, and the notion of gross product may break down at some point. I don't have a very clear answer for you, other than that I could think of no better metric for measuring the size of the future economy's output than GWP. (I'm not an economist.)
Robin Hanson's post "The Limits of Growth" may be useful for understanding how to compare potential future economies of immense size to today's economy. IIRC he makes comparisons using consumers today having some small probability of achieving something that could be had in the very large future economy. (He's an economist. In short, I'd ask the economists, rather than me, for help with interpreting GWP in far-future contexts.)
Thanks for sharing!
In theory, it seems possible to have future value span lots of orders of magnitude while not being binary. For example, one could have a lognormal/loguniform distribution with median close to 10^15, and 95th percentile around 10^42. Even if one thinks there is a hard upper bound, there is the possibility of selecting a truncated lognormal/loguniform (I guess not in Metaculus).
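As a quick sketch of that alternative (numbers illustrative), the lognormal's spread can be backed out from the stated median and 95th percentile:

```python
# Non-binary alternative: a lognormal over maximum GWP (in trillion USD) with
# median ~10^15 and 95th percentile ~10^42, spanning many orders of magnitude
# without concentrating mass at the two extremes. Work in log10 space, where a
# lognormal is simply a normal distribution.
import numpy as np
from scipy.stats import norm

median_log10 = 15
p95_log10 = 42
sigma_log10 = (p95_log10 - median_log10) / norm.ppf(0.95)  # ~16.4 in log10 units

rng = np.random.default_rng(0)
samples_log10 = rng.normal(loc=median_log10, scale=sigma_log10, size=100_000)
print(f"median ~ 10^{np.median(samples_log10):.0f}, "
      f"95th percentile ~ 10^{np.percentile(samples_log10, 95):.0f}")
```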
That depends on what you mean by "existential risk" and "trajectory change". Consider a value system that says that future value is roughly binary, but that we would end up near the bottom of our maximum potential value if we failed to colonise space. Proponents of that view could find advocacy for space colonisation useful, and some might find it more natural to view that as a kind of trajectory change. Unfortunately it seems that there's no complete consensus on how to define these central terms. (For what it's worth, the linked article on trajectory change seems to define existential risk reduction as a kind of trajectory change.)
FWIW I take issue with that definition, as I just commented in the discussion of that wiki page here.
I would agree existential risk reduction is a type of trajectory change (as I mentioned in this footnote). That being said, depending on the shape of future value, one may want to focus on some particular types of trajectory changes (e.g. x-risk reduction). To clarify, I have added "multiple types of" before "trajectory changes".
I don't think that change makes much difference.
It could be better to be more specific, e.g. to talk about value changes, human extinction, civilisational collapse, etc. Your framing may make it appear as if a binary distribution entails that, e.g., value change interventions have a low impact, and I don't think that's the case.
In my view, we should not assume that value change interventions are an ineffective way of reducing existential risk, so they may still be worth pursuing if future value is binary.
I have two thoughts here.
First, I'm not sure I like Bostrom's definition of x-risk. It seems to dismiss the notion of aliens. You could imagine a scenario with a ton of independently arising alien civilizations in which value is very uniform, regardless of what we do. Second, I think the binaryness of our universe is going to depend on the AI we make and/or our expansion philosophy.
AI 1: Flies around the universe dropping single-celled organisms on every livable planet.
AI 2: Flies around the universe setting up colonies that suck up all the energy in the area and convert it into simulations/digital people.
If AI 2 expands through the universe, then the valence of sentience in our lightcone would seemingly be much more correlated than if AI 1 expands. So the AI 1 scenario would look more uniform and the AI 2 scenario would look more binary.