(edit: the author meant something different, and more interesting, than what i thought they did when writing this and my next reply. see my reply there for my thoughts on what they really meant)
thanks for sharing this idea. it has a premise that seems conditionally true to me (and that i hadn’t considered before). after that, i think there’s a basic logical mistake. (i appreciate that you posted this even under uncertainty).
short version of my objection: the ‘ultimate neartermism’ argument would apply if the number of worlds instead started very large and exponentially diminished over time. because the futures of an early world are instead expected to be exponentially many, this fact instead increases the importance of taking actions which influence the futures of that early world. (the number of worlds after x days is the same whether you optimize the near-term or the long-term; but the qualities of those futures differ based on your choice, such as the portion which contain life)
(the original drawn-out-example-case version of my objection, which may be less intuitive to follow, is in a footnote)[1]
i think the premise, that influencing the future of one[2] earlier world is more valuable than influencing the future of one later one, is true (if two assumptions hold[3]) and interesting. this would be action-relevant in the following situation:
you’re uncertain about how long the past is. you think it could either be {possibility 1: the past is 1n years long} or {possibility 2: the past is 2n years long}
you would want to act differently in either possibility, for some reason.
(e.g., let’s say that earlier on there’s a higher risk of asteroids or something)
the future of possibility 1 being larger favors taking the actions you would take in that world.
also account for the footnote[2]
finally, and more meta:
While I believe this idea’s consequences could be profound if correct, and it should be taken slightly more seriously than April Fool’s post “Ultra-Near-Termism”, I consider it mostly a quirky novelty
i disagree with the part i italicized. ideas whose consequences are profound if true, and where one doesn’t see a logical reason for the idea to be false, warrant a correspondingly large amount of investigation. an idea being ‘a quirky novelty’ as you put it, or weird-seeming, or not what other EAs seem to be thinking about, does not, in principle, mitigate its importance.
two other factors in the importance of thinking about such ideas:
they might be harder for someone to think about, which could reduce the likelihood that thinking about them more would improve one’s beliefs about them.
(though, other cases of ‘hard to think about’ could mean there’s really useful deconfusion to be had, particularly if something violates one’s background ontology and so is confusing but only at first.)
others might be unlikely to think of such an idea. this increases its neglectedness / makes it more important to raise it for consideration like you did.
(i’ve been in a mindset before where ‘what other people think’ is automatically more relevant than it should be, and i diminish ideas i have as, e.g., ‘probably not important because others would have thought of them’ or ‘probably not true because they imply a common background assumption is false’; i think it’s liberating to be free of this mentality.)
conditional on ‘exponentially branching worldstates’ like you described, actions which influence the futures of an earlier single world are more impactful than actions which influence the futures of a later single world. note the wording, ‘influence the futures of’, rather than the present.
as an explanatory toy model, let’s say there are copies of you in only one worldstate, and the world branches by a factor of 2 each day (after tomorrow there will be two similar worlds each containing a copy of you, then 4 the next day, and so on). if you maximize ‘what is happening soonest in time’ each day, then on day 16 there will be 32768 worlds, with a (probably pretty similar) version of you in each. if you keep on maximizing ‘what is happening soonest in time’, then eventually a steady percentage of worlds becomes lifeless each day due to unreduced x-risks.
if you instead spend each day trying to first reduce x-risks, then, although there is the same number of worlds and versions of you after 16 days, the x-risks stop happening sooner (or happen at a lower rate), resulting in a greater total portion of live futures. (the number of worlds after x days is the same regardless of your choice, but not the portion which are alive)
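to make this concrete, here’s a minimal sketch of the toy model in python. the specific numbers (a 5%-per-day x-risk, reduced to 1% from day 4 onward) are made-up assumptions for illustration, not something from the post:

```python
# toy model: worlds branch by a factor of 2 each day, and an x-risk kills
# some fraction of the still-live worlds each day. reducing x-risk lowers
# that per-day rate from some day onward. (all rates are made-up numbers.)

def live_portion(days, base_rate, reduced_rate, reduction_day):
    """portion of worlds still alive after `days`, if the per-day x-risk
    rate drops from `base_rate` to `reduced_rate` on `reduction_day`."""
    portion = 1.0
    for day in range(days):
        rate = base_rate if day < reduction_day else reduced_rate
        portion *= 1.0 - rate
    return portion

days = 16
# maximizing 'what is happening soonest in time' never reduces the risk:
near_term = live_portion(days, base_rate=0.05, reduced_rate=0.05, reduction_day=days)
# spending the early days on x-risk reduction lowers the rate from day 4 on:
long_term = live_portion(days, base_rate=0.05, reduced_rate=0.01, reduction_day=4)

print(f"worlds on day {days}: {2 ** (days - 1)}")               # 32768 either way
print(f"portion alive, near-term optimizing: {near_term:.2f}")  # ~0.44
print(f"portion alive, x-risk reducing:      {long_term:.2f}")  # ~0.72
```

the world-count is identical in both runs; only the live portion differs, which is the point of the model.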
(i use this odd ‘one [earlier/later] world’ phrasing because it’s possible that, for example, on day 2 the actions of both copies of you are correlated, so your choice is still influencing the rest of the future, not just half of it)
the assumptions:
the world is branching
total time* is finite
* by total time, i mean time during which (dis)value happens; this could mean pre-‘heat death’
as i have not studied physics, i’m uncertain about both of these.
Thank you, I appreciate your comment very much.
I realized upon reading your response that I was relying very heavily on people either watching the video I referenced or already being quite knowledgeable about this aspect of physics.
I apologize for not being able to answer the entire detailed comment, but I’m quite crunched for time, as I nerd-sniped myself into spending a few hours writing this post this morning when I had other important work to do, haha…
Additionally, the response I have is relatively brief; I actually added it to the post itself toward the beginning:
“Based on a comment below, to be clear, this is different from quantum multiverse splitting, as this splitting happens just prior to the Big Bang itself, causing the Big Bang to occur, essentially causing new, distinct bubble universes to form which are completely physically separate from each other, with it being impossible to causally influence any of the younger universes using any known physics as far as I am aware.”
That said, I think that in reference to the quantum multiverse, what you’re saying is probably true and a good defense against quantum nihilism.
For more detail on the multiple levels of multiverse I have in mind, see Max Tegmark’s “Mathematical Universe”, which is quite popular and, if I remember correctly, includes both of these in his four-level multiverse.
If I am mistaken in some way about this, though, please let me know!
On the meta stuff, however, I think you are probably correct and appreciate the feedback/encouragement.
I think when I have approached technical subjects that I’m not exceptionally knowledgeable about, I have at least once gotten a lot of pushback and downvotes, even though it soon after became clear that I was probably not mistaken and was even likely using the technical language correctly.
It seems this may have also occurred when I was not being appropriately uncertain and hesitant in stylistic aesthetics or epistemic emphasis, and because of this, I have moved along the incentive gradient toward expressing higher uncertainty so as not to be completely ignored, though I may have moved too far in the other direction.
Intuitively though, I do feel this idea is a bit grotesque, and worry that if it became highly popular it might have consequences I actually don’t like.
this is different from quantum multiverse splitting, as this splitting happens just prior to the Big Bang itself, causing the Big Bang to occur, essentially causing new, distinct bubble universes to form which are completely physically separate from each other, with it being impossible to causally influence any of the younger universes using any known physics as far as I am aware.
to paraphrase what i think you mean: “new universes are eternally coming into existence at an exponentially increasing rate, and no universe can be causally influenced by actions in another”. in that case:
because they’re all causally separated, we can ignore which are newer or older and just model the portions between them.
(it’s true that most copies of us would exist in later universes)
given causal separateness: apart from acausal trade, the best action is the same as if there were only one world: to focus on the long term (of that single universe). (see the sketch below)
(considerations related to acausal trade and the infinite number of universes are in a footnote)[1]
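here’s a minimal sketch of that decomposition in python (the action values and the other-universes constant are made-up placeholders):

```python
# with causal separation, total value across universes decomposes into a sum
# of per-universe terms, and only our own universe's term depends on our
# action; the rest is a constant offset that cancels out of the comparison.

OUR_UNIVERSE_VALUE = {"optimize near-term": 1.0, "optimize long-term": 5.0}
OTHER_UNIVERSES_VALUE = 1_000_000.0  # constant: we can't causally affect it

def total_value(action: str) -> float:
    return OTHER_UNIVERSES_VALUE + OUR_UNIVERSE_VALUE[action]

# the argmax is the same as if our single universe were all there is:
best = max(OUR_UNIVERSE_VALUE, key=total_value)
print(best)  # optimize long-term
```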
i don’t see where this implies ultimate-neartermism. below i’ll write where i think your reasoning went wrong, if i understood it correctly. (edit: i read hans’ post, and i now see that you indeed meant something different! i’ll leave the below as an archive.)
[there are exponentially more younger (where younger means later) universes, therefore...] if we are trying to maximize the amount of good across all universes, what we should evidentially care about is what is happening soonest in time across all universes
i could have misinterpreted this somehow, but it seems like a mistake mainly of this form:
(1) (premise) statement A is true for set Y.
(2) statement A being true for set Z would imply statement B is true for set Z.
(3) therefore statement B is true for set Z.
the step from (2) to (3) is invalid, because it has not been established that statement A is true of set Z, only that it’s true of set Y.
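as an inference schema (my own notation, not the author’s):

$$
\frac{A(Y) \qquad A(Z) \rightarrow B(Z)}{B(Z)}
$$

this would only be valid with $A(Z)$ as a premise, and $A(Y)$ does not supply it.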
applying this to the quote:
(1) for Y:[the set of all possible universes], A:[most universes are younger (existing later in time)].
(2) ~A:[most moments[2] are younger (beginning later in time)] being true for Z:[moments within a single universe] implies B:[the majority of moments are the last[3] possible one] for Z
(3) therefore B is true for Z
(my original natural language phrasing: though there are vastly more younger [later] universes, this does not imply that younger [later] points in time within a single universe are quantitatively more than those at earlier points.)
i think these are both orthogonal to your argument for ‘ultimate neartermism’.
for acausal trade considerations, just model the portions of different utility across worlds and make the trade accordingly.
though, new universes coming into existence ‘eternally’ (and at a non-diminishing rate) implies an infinite number of them. different possible value systems respond differently to this.
for some value systems, which care about quantity, everything they care about will always be maxed out along all dimensions due to infinite quantity* - at least, unless something they care about occurs with exactly 0% frequency, which could, i think, be influenced by portional acausal trade in certain logically-possible circumstances (i.e., maybe not possible for ‘actual reality’, but possible at least in some mathematical universes).
other utility functions might care about portion (that is, frequency within the infinite number of worlds) rather than quantity. (e.g., i think my altruism still cares about this, though it’s really tragic that there’s infinite suffering). these ones acausally trade with each other.
* actually, that’s not necessarily true. it can be reasoned that the number is never actually infinite, only exponentially large, no matter how long the process continues (2^x is finite for every finite x), in which case at any actual point in time, quantity can still be increased or decreased.
(also, a slightly different statement A is used in (2): about moments rather than universes)
it seems to me that another understandable language mistake was made: a ‘younger universe’ (i.e., a universe which began to exist after already-existing (older) ones) sounds like it would, when translated to a single universe, mean ‘an earlier point in time within that universe’; after all, a universe where less time has passed is younger. but ‘younger’ actually meant ‘occurs later’ in that context, plus we’re now discussing moments rather than universes.