this is very interesting, thank you!
my thoughts, ~ in order: (they are all conditional on the described cosmic inflation theory being true)
this seems to imply there’s a constant ‘time’ factor operating over all worlds at once, and that it’s coherent to say that the arisal of a new universe happens at the ‘same time as’ some specific point in {time of a specific universe}.
i don’t study physics, so i guess it could be true (and we can imagine programs which share that structure)! my intuition says it could also be that each universe happens ‘all at once’, relative to the arisal of universes, i.e that there’s a separate meta-time that universe generation happens along. if so, longtermism is preferred again.
possibly, in the actual physics theory, there’s no ‘meta’ or ontologically-fundamental separation between universes, just causal separation (like the ‘unobservable universe’ in standard cosmology). this would probably imply a shared time / lack of ‘meta-time’
(the below thoughts are as if that implication is true)
for any set of universes in which the short-term-preferring tradeoff is made, that set will be worse off in the long term. so this does, in a sense, trade short-term happiness for a larger amount of long-term suffering; it’s just that that larger amount, by the time it happens, is already outweighed by the short-term happiness in a vastly larger number of future universes (which itself will also soon be outweighed in those same universes, but not before... and so on).
this opens the door to two possible values: those which care about always-not-yet-infinite-time, and those which care about ‘infinite’ time as if over the entirety of this neverending pattern. for the latter, longtermism is optimal.
if you’re doing anthropic reasoning, i.e without grounding in any particular world, it’s always the case that your instantiations are more common in the ‘last’ generation. but there is no ‘last’ generation, because the pattern is neverending.
my guess is that there’s no ‘correct anthropic solution’ in response to this, and it just depends on what algorithm an agent is running, but i can imagine some which intake this situation and reason over it as an infinite rather than always-increasing-finite set as a result.[1]
there’s a way to almost-maximize both this and the parts of our values which go against this:
first, and immediately, take the action which results in the most well-being (or other value) in the immediate term. (alas, i was not able to figure out how to make myself feel happy in time)
anything done beyond that brief period is so very exponentially small in comparison. (also, the first such brief period was actually way before we even existed, as is most life[2])
given that, you’ve achieved the vast majority of value that will ever exist under those assumptions (or values). you may now spend the rest of time maximizing your other values (in particular, the ‘infinite-time’ framing of value described in 2).
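a toy calculation of how lopsided this weighting is (a sketch: i use a per-second growth factor G = 1000, since the theory’s 10^10^34 is far too large to compute with directly, but the qualitative pattern is the same):

```python
# toy model: the number of universes multiplies by G each 'second'.
# with the theory's G = 10^10^34 this is uncomputable directly,
# so we use G = 1000 to show the qualitative pattern.
G = 1000

def share_of_latest_second(n_seconds):
    """fraction of all universe-seconds so far that occurred
    in the single most recent second."""
    counts = [G**t for t in range(n_seconds)]
    return counts[-1] / sum(counts)

for n in (2, 5, 10):
    print(n, share_of_latest_second(n))
# the share approaches (G - 1) / G = 0.999: almost all existence-so-far
# is always concentrated in the newest generation of universes.
```

with the actual growth factor, the most recent brief period always holds all but a ~1-in-10^10^34 sliver of everything that has existed so far, which is the sense in which anything done beyond it is exponentially small in comparison.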
though, i notice another odd way in which “anything done beyond that brief period is so very exponentially smaller in comparison” could be not quite true. if earlier in-universe-time is so much exponentially larger, then we might expect it to contain an exponentially large number of boltzmann-brain-like situations[3] occurring at the very start of times. to bite this bullet would suggest one of three things:
always acting as an immediate-termist, so that a correlated action is taken in the earliest boltzmann-situations
(also see fn3[3])
or, for values under which extreme suffering is of a different class than lighter suffering (always more important than any amount of the lighter kind), acting to minimize the soonest nearby instance of extreme suffering.
or a different strategy that is probably much harder for humans, but maybe possible for ASIs: if, somehow, your choices don’t just acausally correlate to the actions of versions of you in boltzmann contexts, but also correlate to which boltzmann contexts arise to begin with (because your actions are determined by the past in general, so which pasts are coherent depends on what choice you make).
(reminder: these are just thoughts that i had conditional on the physics theory in the post being true, and i don’t have an inside-view belief about whether it is.)
(slightly edited from fn1 of my earlier comment that misunderstood OP):
possible values respond differently to infinite quantities.
for some, which care about quantity, they will always be maxxed out along all dimensions due to infinite quantity. (at least, unless something they (dis)value occurs with exactly 0% frequency, implying a quantity of 0 - which could, i think, be influenced by portional acausal trade in certain logically-possible circumstances. (i.e maybe not the case in ‘actual reality’ if it’s infinite, but possible at least in some mathematically-definable infinite universes; as a trivial case, a set of infinite 1s contains no 0s. more fundamentally, an infinite set of universes can be a finite set occurring infinite times.))
other values might care about portion—that is, portion of / percentage-frequency within the infinite amount of worlds—the thing that determines the indexical probability of an observation in that context—rather than quantity. (e.g., i think my altruism still cares about this, though it’s really tragic that there’s infinite suffering).
note this difference is separate from whether the agent conceptualizes this world as finite-increasing or infinite
(as an aside, this means that us existing now does not resolve the ‘youngness paradox’; that would require we exist at the first moment of the first observer)
i would rather dissolve the youngness paradox by saying that the probability of our existence is still logically guaranteed (i.e. 1), even if it’s exponentially small in comparative frequency (rather than logical probability); and to the extent it could answer the fermi paradox, answer that instead with mutual anthropic capture
not necessarily lone brains, whose actions don’t affect what observations they receive next as they soon dissolve; but maybe, in particular if it’s required for our actions to have a correlated effect, situations where at least some brief period of action is possible before dissolution.
maybe we’d select for the smallest possible context in which there’s a copy of us, where it can take some form of action, such as ‘thinking the happiest thought it can’ rather than something requiring physical movement, on the basis that such smaller configurations will randomly occur exponentially more often; and then take that smallest action ourselves for the correlated effect.
implies the correct action would be to try to have happy thoughts immediately
Hey again quila, really appreciate your incredibly detailed response, although again I am neglecting important things and unfortunately really don’t have any time to write a detailed response, my sincere apologies for this! By the way, really glad you got more clarity from the other post, I also found this very helpful.
Yes, I think there is a constant time factor. It is all one unified, single space-time, as I understand it (although this also isn’t an area of very high expertise for me). I think that what causally separates the universes is simply that space is expanding so fast that the universes are separated by an incredible amount of space and don’t have any possibility of colliding again until much, much later in the universes’ timelines.
Yes, I believe this is correct. I am pretty uncertain about this.
A reason for believing it might make more sense to say that what matters is the proportion of universes with greater positive versus negative value is that, intuitively, it feels like you should have to specify some time at which you are measuring the total amount of positive versus negative value across all universes. That is something we do, in principle, know how to calculate at any given second; and at any given time along the infinite timeline of the multiverse, every younger second always has 10^10^34 times more weight than older seconds.
Nonetheless, it is totally plausible that you should calculate the total value of all universes that will ever exist as though from an outside observer perspective that is able to observe the infinity of universes in their entirety all at once.
A very, very crucial point is that this argument is only trying to calculate what is best to do in expectation, and even if you have a strong preference for one or other of these theories, you probably don’t have a preference that is stronger than a few orders of magnitude, so in terms of orders of magnitude it actually doesn’t make much of a difference which you think is correct, as long as there is nonzero credence in the first method.
As a side point, I think that’s actually the worrying/exciting thing about this theory as I think about it more: it’s hard to think of anything that could have more orders of magnitude greater possible impact than this does, except of course any theories where you can either generate or fail to generate infinities of value within our universe. This theory does state that you are creating infinite value, since this value will last infinitely into the future universes, but if within this universe you create further infinities, then you have infinities of infinities, which trump singular or even just really big infinities.
Yes! I have been editing the post and added something somewhat similar before reading this comment; there are lots of weird implications related to this. Nonetheless, it always continues to be true that this theory might dominate many of the others in terms of expected value, so I think it could make sense to just add it as 1% of our portfolio of doing good (since 1% versus 100% would be not even a rounding error of a rounding error in terms of orders of magnitude), and hence we don’t have to feel bad about otherwise ignoring it forever. I don’t know, maybe that’s silly. Yes, it certainly does seem like it’s a theory which is unusually easy to compromise with!
And that’s a very interesting point about the Boltzmann brains; I hadn’t thought of that before. I feel like this theory is so profoundly underdeveloped and uninvestigated that there are probably many, many surprising implications or crucial considerations hiding not too far away.
Sorry again for not replying in full, I really am neglecting important things that are somewhat urgent (no pun intended). If there is anything really important you think I missed feel free to comment again, I do greatly appreciate your comments, though just a heads up I will probably only reply very briefly or possibly not at all for now.
it’s okay if you don’t reply. my above comment was treating this post as a schelling point to add my thoughts to the historical archive about this idea.
about ‘living in the moment’ in your other comment: if we ignore influencing boltzmann brains/contexts, then applying ‘ultimate neartermism’ now actually looks more like being a longtermist to enable eventual acausal trades with superintelligence* in a younger universe-point. (* with ‘infinite time’ values, so the trade is preferred to them)
A very, very crucial point is that this argument is only trying to calculate what is best to do in expectation, and even if you have a strong preference for one or other of these theories, you probably don’t have a preference that is stronger than a few orders of magnitude, so in terms of orders of magnitude it actually doesn’t make much of a difference which you think is correct, as long as there is nonzero credence in the first method.
i’m not sure if by ‘these theories’ you meant different physics theories, or these different possible ways of valuing a neverending world (given the paragraphs before the quoted one). if you meant physics theories, then i agree that such quantitative differences matter (this is a weak statement as i’m too confused about infinite universes with different rates-of-increasing to have a stronger statement).
if you meant values:
that’s not how value functions have to be. an in-principle example: there could be a value function which contains both of these and normalizes the scores on each to be within −1 to 1 before summing them.
i don’t think it’s the case that the former function, unnormalized, has a greater range than the latter function. intuitively, it would actually be the case that ‘infinite time’ has an infinitely larger range, but i suspect this is actually more of a different kind of paradox and both would regard this universe as infinite.
paradox between ‘reason over whole universe’ and ‘reason over each timestep in universe’. somehow these appear to not be the same here.
i don’t actually know how to define either of them. i can write a non-terminating number-doubling-program, and ig have that same program also track the sum so far, but i don’t know what it actually means to sum an (at least increasing) infinite series.
actually, a silly idea comes to mind: if (we’re allowed to say[1]) some infinite series like [1/2 + 1/4 + 1/8 + …] sums to a finite number (1 in that case), then we can also represent the universe going backwards with a decreasing infinite series. i.e., [1 + (1 ÷ 10^10^34) + (1 ÷ 10^10^34^2) + …], where the first term represents the size of the end rather than start of the universe. this way, the calculation at least doesn’t get stuck at infinity. this does end up more clearly implying longtermism, while maintaining the same ratio between size of universe at different times.
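a sketch of what ‘summing an infinite series’ means without any ‘…’ operation: formally, the sum is *defined* as the limit of the finite partial sums. a minimal numeric check of both series above (using a toy shrink factor R = 1000 in place of 10^10^34, which is uncomputable directly):

```python
from fractions import Fraction

def partial_sum(first, ratio, n_terms):
    """partial sum of the geometric series
    first + first*ratio + first*ratio^2 + ...
    formally, the infinite series is defined as the limit of these
    partial sums, so no '...' operation appears in the definition."""
    return sum(Fraction(first) * Fraction(ratio) ** k for k in range(n_terms))

# [1/2 + 1/4 + 1/8 + ...]: partial sums are 1 - (1/2)^n, limit 1.
print(float(partial_sum(Fraction(1, 2), Fraction(1, 2), 30)))

# the 'universe backwards' series with toy R = 1000 standing in for
# 10^10^34: [1 + 1/R + 1/R^2 + ...] has limit R/(R-1), barely above 1,
# so the calculation doesn't get stuck at infinity.
R = 1000
print(float(partial_sum(1, Fraction(1, R), 10)))
```

(exact `Fraction` arithmetic is used so the partial sums carry no floating-point error; only the final print converts to float for display.)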
but it’s also technically wrong, if the universe has a start but no end, rather than an end but no start.
(though in my intuitive[2] math system, these statements are true: [1/inf > 0], [1/inf × inf = 1], [2/inf × inf = 2]. this could resolve this by letting the start of the universe be represented as 1/10^10^34^inf, so that the increasing infinite series starting from there has the same sum as the decreasing infinite series above.)
(i’m not a mathematician). i don’t understand how an infinite series can be writable in existing formal languages—it seems like it would require a ‘...’ (‘and so on...‘) operation in the definition itself, but ‘...’ is not {one of the formally allowed operations}/defined.
meant as a warning that this is not formal or well-understood by me. not meant as legitimation.
that said, i think a formal system which allows these along with other desirable math is possible in principle (and this looks related), maybe in a trivial way
as a simpler intuition for why such x/inf statements can be useful: if there is a sequence of infinite 0s which also contains, somewhere in it, just one 1, the portion of 1s is not 0 but 1 in infinity or 1/inf. similar: an infinite sized universe with finite instances of something (which is also trivially possible, e.g a unique center with repeatingly infinite area outwards from it)
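a numeric companion to that intuition (a sketch; note that standard analysis assigns this portion a limit of exactly 0, which is precisely what the nonstandard 1/inf is meant to distinguish from a quantity of 0):

```python
# the quantity/portion distinction made concrete: a sequence of 0s
# containing a single 1. the count of 1s is fixed at 1 (a nonzero
# quantity), while the standard limiting frequency ('natural density')
# of 1s over ever-larger prefixes tends to 0 -- motivating a notation
# like 1/inf for 'nonzero quantity, zero standard density'.

def density_of_ones(prefix_len, position_of_one=5):
    """frequency of 1s among the first prefix_len terms of the
    sequence that is all 0s except for a single 1."""
    count = 1 if position_of_one < prefix_len else 0
    return count / prefix_len

for n in (10, 1_000, 1_000_000):
    print(n, density_of_ones(n))
# the density shrinks toward 0 as n grows, yet the count of 1s stays 1.
```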