summary of my comment: in anthropics problems where {what your meaning of ‘probability’ yields} is unclear, instead of focusing on ‘what the probability is’, focus on fully comprehending the situation and then derive what action you prefer to take.[1]
imv this is making a common mistake (“conflating logical and indexical uncertainty”). here’s what i think the correct reasoning is. it can be done without ever updating our probability about what the rate is.
i write two versions of this for two cases:
case 1, where something like many-worlds is true, in which case as long as the rate of omnideath events is below ~100%, there will always be observers like you.
case 2, where there is only one small world, in which case, if the probability of an omnideath event occurring at least once were high, there might be ‘no observers’ or ‘no instantiated copies of you’.
i try to show these are actually symmetrical.
finally i show symmetry to a case 3, where our priors are 50⁄50 between two worlds, in one of which we certainly (rather than probably) would not exist; we do not need to update our priors (in response to our existence), even in that case, to choose what to do.
case 1: where something like many-worlds is true.
we’re uncertain about rate of omnideath events.
simplifying premise: the rate is discretely either high or very low, and we start off with 50% credence on each.
we’re instantiated ≥1 times in either case, so our existence cannot rule either case out.
we could act as if the rate is low, or act as if it is high. these have different ramifications:
if we act as if the rate is low, this has expected value equal to: prior(low rate) × {value-of-action per world, conditional on low rate} × {number of worlds we’re in, conditional on low rate}
if we act as if the rate is high, this has expected value equal to: prior(high rate) × {value-of-action per world, conditional on high rate} × {number of worlds we’re in, conditional on high rate}
if we assume a similar conditional ‘value-of-action per world’ in each, then acting as if the rate is low has higher EV (on those 50⁄50 priors), because the number of worlds we’re in is much higher if the rate of omnideath events is low.
note that there is no step here where the probability is updated away from 50⁄50.
probability is the name we give to a variable in an algorithm, and the algorithm above goes through all the relevant steps without using that term. to focus on ‘what the probability becomes’ is just a question of definitions: for instance, if probability is defined such that {more copies of me in a conditional = (by definition) ‘higher probability’ assigned to that conditional}, then that sense of probability would ‘assign more to a low rate’ (a restatement of the EV conclusion above).
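(a minimal sketch of the case-1 calculation in python; the world counts and values are made-up illustrative numbers, and the 50⁄50 prior enters only as a fixed weight, never updated:)

```python
# case 1 EV sketch. all numbers here are illustrative assumptions, not
# claims about the actual rate of omnideath events.
prior = {"low": 0.5, "high": 0.5}             # fixed 50/50 prior over the rate
value_per_world = {"low": 1.0, "high": 1.0}   # assumed similar in either case
worlds_we_are_in = {"low": 1000, "high": 10}  # far more surviving branches if the rate is low

def ev_of_acting_as_if(rate: str) -> float:
    # prior(rate) x value-of-action per world x number of worlds we're in
    return prior[rate] * value_per_world[rate] * worlds_we_are_in[rate]

print(ev_of_acting_as_if("low"))   # 500.0
print(ev_of_acting_as_if("high"))  # 5.0 -> act as if the rate is low
```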
onto case 2: single world
(i expect the symmetry to be unintuitive. i imagine a reader having had a thought process like this:
“if i imagine a counterfactual where the likelihood of omnideath events per some unit of time is high, then because [in this case 2] there is only one world, it could be the case that my conditionals look like this: ‘there is one copy of me if the rate is low, but none at all if the rate is high’. that would be probabilistic evidence against the possibility it’s high, right? because i’m conditioning on my existence, which i know for sure occurs at least once already.” (and i see this indicated in another commenter’s, “I also went on quite a long digression trying to figure out if it was possible to rescue Anthropic Shadow by appealing to the fact that there might be large numbers of other worlds containing life”)
i don’t know if i can make the symmetry intuitive, so i may have to abandon some readers here.)
prior: uncertain about rate of omnideath events.
(same simplifying premise)
i’m instantiated exactly once.
it’s statistically more probable that i’m instantiated exactly once if the rate is low.
though, a world with a low rate of omnideath events could still have one happen, in which case i wouldn’t exist conditional on that world.
it’s statistically more probable that i’m instantiated exactly zero times if the rate is high.
though, a world with a high rate of omnideath events could still have one not happen, in which case i would exist conditional on that world.
(again going right to the EV calculation):
we could act as if the rate is low, or act as if it is high. these have different ramifications:
if we act as if the rate is low, this has expected value equal to: prior(low rate) × {value-of-action in the sole world, conditional on low rate} × {statistical probability we are alive, conditional on low rate}
if we act as if the rate is high, this has expected value equal to: prior(high rate) × {value-of-action in the sole world, conditional on high rate} × {statistical probability we are alive, conditional on high rate}
(these probabilities are conditional on either rate already being true, i.e. p(alive | rate), but we’re not updating the probabilities of the rates themselves. keeping those priors fixed would violate bayes’ formula if we conditioned on ‘alive’, so we intentionally avoid conditioning on it here.
in a sense we’re avoiding it to rescue the symmetry of these three cases. i suspect {not conditioning on one’s own existence, instead acting as if one exists per the EV calculation} could help with other anthropics problems too.)
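(for concreteness, with illustrative numbers like p(alive | low rate) = 0.99 and p(alive | high rate) = 0.01, the update we’re declining to make would be: p(low rate | alive) = 0.99 × 0.5 ÷ (0.99 × 0.5 + 0.01 × 0.5) = 0.99.)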
if we assume a similar conditional ‘value-of-action’ in each sole world, then acting as if the rate is low has higher EV (on those 50⁄50 priors), because the ‘statistical probability we are alive’ is much higher if the rate of omnideath events is low.
(i notice i could use the same sentences with only a few changes from case 1, so maybe this symmetry will be intuitive)
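(the same sketch for case 2; the only change from the case-1 code is that the world count becomes p(alive | rate), with the 0.99 and 0.01 again being illustrative assumptions:)

```python
# case 2 EV sketch: single world. numbers are again illustrative assumptions.
prior = {"low": 0.5, "high": 0.5}                 # fixed 50/50 prior, still never updated
value_of_action = {"low": 1.0, "high": 1.0}       # assumed similar in either sole world
p_alive_given_rate = {"low": 0.99, "high": 0.01}  # statistical probability we are alive

def ev_of_acting_as_if(rate: str) -> float:
    # weight by the chance our action exists to matter, rather than
    # conditioning the rate itself on being alive
    return prior[rate] * value_of_action[rate] * p_alive_given_rate[rate]

print(ev_of_acting_as_if("low"))   # 0.495
print(ev_of_acting_as_if("high"))  # 0.005 -> act as if the rate is low
```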
(to be clear, in reality our prior might not be 50⁄50 (/fully uncertain); e.g., maybe the equations of physics suggest universe-destroying chain reactions should be happening frequently, but never with 100% likelihood; or maybe they instead suggest such chain reactions are impossible.)
main takeaway: in anthropics problems where {what your sense of ‘probability’ yields} is unclear, instead of focusing on ‘what the probability is’, focus on fully comprehending the situation and then derive what action you prefer to take.
(case 3) i’ll close with a further, more fundamental-feeling insight: this symmetry holds even when we consider a rate of 100% rather than just ‘high’. as in, you can run that through this same structure and it will give you the correct output.
the human-worded version of that looks like, “i am a fixed mathematical function. if there are no copies of me in the world[2], then the output of this very function is irrelevant to what happens in the world. if there are copies of me in the world, then it is relevant / directly corresponds to what effects those copies have. therefore, because i am a mathematical function which cares about what happens in worlds (as opposed to what the very function itself does), i will choose my outputs as if there are copies of me in the world, even though there is a 50% chance that there are none.”
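(a sketch of the 100% version; setting p(alive | high rate) to exactly 0 makes the high-rate branch contribute nothing to either EV, so the choice of output is unchanged, still with no update to the 50⁄50 prior:)

```python
# case 3 EV sketch: in one of the two worlds we certainly do not exist.
prior = {"low": 0.5, "high": 0.5}
p_alive_given_rate = {"low": 1.0, "high": 0.0}  # certain non-existence if the rate is high

def ev_of_acting_as_if(rate: str, value_of_action: float = 1.0) -> float:
    # a branch containing no copies of us contributes zero regardless of output
    return prior[rate] * value_of_action * p_alive_given_rate[rate]

print(ev_of_acting_as_if("low"))   # 0.5
print(ev_of_acting_as_if("high"))  # 0.0 -> choose outputs as if we exist
```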
(as in the ‘sleeping beauty problem’, once you have all the relevant variables, you can run simulations / predict what choice you’d prefer to take (such as if offered to bet that the day is tuesday) without ever ‘assigning a probability’)
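(a sketch of one such simulation, under an assumed betting setup: heads means a single monday awakening, tails means monday and tuesday awakenings, and each awakening offers a bet that pays 1 if it is tuesday for a price of `cost`:)

```python
import random

# sleeping beauty betting sketch. the payoff structure (pay `cost` at each
# awakening, win 1 if it is tuesday) is an illustrative assumption.
def avg_payoff_of_always_betting(cost: float, trials: int = 100_000) -> float:
    total = 0.0
    for _ in range(trials):
        tails = random.random() < 0.5
        days = ["monday", "tuesday"] if tails else ["monday"]
        for day in days:  # one bet per awakening
            total += (1.0 if day == "tuesday" else 0.0) - cost
    return total / trials

# expected payoff per experiment works out to 0.5 - 1.5*cost, so the policy
# of betting is preferred exactly when cost < 1/3 -- derived from payoffs,
# never from assigning a probability to 'today is tuesday'.
print(avg_payoff_of_always_betting(0.25))  # ~ +0.125
print(avg_payoff_of_always_betting(0.40))  # ~ -0.10
```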
(note: ‘the world’ is underdefined here. ‘the real world’ is not a fundamental mathematical entity that can be pointed to in any formal system we know of; we can fix this by having the paragraph refer, instead of to ‘the world’, to some specific function which the one in question is uncertain about whether it itself is contained in.
we seem to conceptualize the ‘world’ we are in as some particular ‘real world’ and not just one abstract mathematical function among infinitely many. i encourage thinking more about this, though it’s unrelated to the main topic of this comment.)