To answer the first question, no, the argument doesn’t rely on SIA. Let me know if the following is helpful.
Suppose your prior (perhaps after studying plate tectonics and so on, but not after considering the length of time that’s passed without an extinction-inducing supervolcano) is that there’s probability “P(A)”=0.5 that the risk of an extinction-inducing supervolcano at the end of each year is 1⁄2 and probability “P(B)”=0.5 that the risk is 1⁄10. Suppose that the world lasts at least 1 year and at most 3 years regardless.
Let “A1” be the possible world in which the risk was 1⁄2 per year and we blew up at the end of year 1, “A2” be that in which the risk was 1⁄2 per year and we blew up at the end of year 2, and “A3” be that in which the risk was 1⁄2 per year and we never blew up, so that we got to exist for 3 years. Define B1, B2, B3 likewise for the risk=1/10 worlds.
Suppose there’s one “observer per year” before the extinction event and zero after, and let “Cnk”, with k<=n, be observer #k in world Cn (C can be A or B). So there are 12 possible observers: A11, A21, A22, A31, A32, A33, and likewise for the Bs.
If you are observer Cnk, your evidence is that you are observer #k. The question is what Pr(A|k) is: the probability you should assign to the annual risk being 1⁄2 given your evidence.
Any Bayesian, whether following SIA or SSA (or anything else), agrees that
Pr(A|k) = Pr(k|A)Pr(A)/Pr(k),
where Pr(.) is the credence an observer should have for an event according to a given anthropic principle. The anthropic principles disagree about the values of these credences, but here the disagreements cancel out. Note that we do not necessarily have Pr(A)=P(A): in particular, if the prior P(.) assigns equal probability to two worlds, SIA will recommend assigning higher credence Pr(.) to the one with more observers, e.g. by giving an answer of Pr(coin landed heads) = 1⁄3 in the sleeping beauty problem, where on this notation P(coin landed heads) = 1⁄2.
On SSA, your place among the observers is in effect generated first by randomizing among the worlds according to your prior and then by randomizing among the observers in the chosen world. So Pr(A)=0.5, and
Pr(1|A) = 1⁄2 + 1/4*1/2 + 1/4*1/3 = 17⁄24
(since Pr(n=1|A)=1/2, in which case k=1 for sure; Pr(n=2|A)=1/4, in which case k=1 with probability 1⁄2; and Pr(n=3|A)=1/4, in which case k=1 with probability 1⁄3);
Pr(2|A) = 1/4*1/2 + 1/4*1/3 = 5⁄24; and
Pr(3|A) = 1/4*1/3 = 2⁄24.
For simplicity we can focus on the k=2 case, since that’s the case analogous to people like us, in the middle of an extended history. Going through the same calculation for the B worlds gives Pr(2|B) = 63⁄200, so Pr(2) = 0.5*5/24 + 0.5*63/200 = 157⁄600.
So Pr(A|2) = 125⁄314 ≈ 0.4.
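For anyone who wants to check the arithmetic, the SSA calculation above can be sketched in a few lines of Python using exact fractions (the setup is exactly the one above; nothing new is assumed):

```python
from fractions import Fraction as F

# Lifetime distributions from the setup above: Pr(world lasts n years | A or B).
pA = {1: F(1, 2), 2: F(1, 4), 3: F(1, 4)}        # risk 1/2 per year
pB = {1: F(1, 10), 2: F(9, 100), 3: F(81, 100)}  # risk 1/10 per year

def pr_k(p_n, k):
    # SSA: Pr(you are observer #k) = sum over n >= k of Pr(n) * 1/n
    # (randomize over worlds by the prior, then uniformly over observers).
    return sum(p_n[n] * F(1, n) for n in p_n if n >= k)

pr2_A, pr2_B = pr_k(pA, 2), pr_k(pB, 2)          # 5/24 and 63/200
pr_A_given_2 = F(1, 2) * pr2_A / (F(1, 2) * pr2_A + F(1, 2) * pr2_B)
print(pr_A_given_2)  # 125/314, about 0.398
```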
On SIA, your place among the observers is generated by randomizing among the observers, giving proportionally more weight to observers in worlds with proportionally higher prior probability, so that the probability of being observer Cnk is
1/12*P(Cn) / [sum over possible observers, labeled “Dmj”, of (1/12*P(Dm))].
This works out to Pr(2|A) = 2⁄7 [6 possible observers given A, but the one in the n=1 world “counts for double”, since that world is twice as likely a priori as the n=2 or n=3 worlds];
Pr(A) = 175⁄446 [less than 1⁄2 since there are fewer observers in expectation when the risk of early extinction is higher], and
Pr(2) = 140⁄446, so
Pr(A|2) = 5⁄14 ≈ 0.36.
So in both cases you update on the fact that a supervolcano did not occur at the end of year 1, from assigning probability 0.5 to the event that the underlying risk is 1⁄2 to assigning some lower probability to this event.
But I said that the disagreements canceled out, and here it seems that they don’t cancel out! This is because the anthropic principles disagree about Pr(A|2) for a reason other than the evidence provided by the lack of a supervolcano at the end of year 1: namely, the possible existence of year 3. How to update on the fact that you’re in year 2 when you “could have been” in year 3 gets into doomsday argument issues, which the principles do disagree on. I included year 3 in the example because I worried it might seem fishy to make the example all about a 2-period setting where, in period 2, the question is just “what was the underlying probability we would make it here”, with no bearing on what probability we should assign to making it to the next period. But since this is really the example that isolates the anthropic shadow consideration, observe that if we simplify things so that the world lasts at most 2 years (and there are 6 possible observers), SSA gives
Pr(2|A) = 1⁄4, Pr(A) = 1⁄2, Pr(2) = 7⁄20 → Pr(A|2) = 5⁄14.
and SIA gives
Pr(2|A) = 1⁄3, Pr(A) = 15⁄34, Pr(2) = 14⁄34 → Pr(A|2) = 5⁄14.
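To see the agreement concretely, here’s a small Python sketch of the 2-year case (exact fractions; world priors as defined above):

```python
from fractions import Fraction as F

# World priors in the 2-year version: P(risk family) * P(lifetime n | family).
worlds = [('A', 1, F(1, 2) * F(1, 2)), ('A', 2, F(1, 2) * F(1, 2)),
          ('B', 1, F(1, 2) * F(1, 10)), ('B', 2, F(1, 2) * F(9, 10))]

def credence_in_A_at_k2(weight):
    # weight(p, n): relative probability of being observer #2 in a world
    # with prior p and n observers (observer #2 only exists when n >= 2).
    w = {fam: sum(weight(p, n) for f, n, p in worlds if f == fam and n >= 2)
         for fam in ('A', 'B')}
    return w['A'] / (w['A'] + w['B'])

ssa = credence_in_A_at_k2(lambda p, n: p * F(1, n))  # uniform within each world
sia = credence_in_A_at_k2(lambda p, n: p)            # proportional to world prior
print(ssa, sia)  # both 5/14
```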
____________________________
An anthropic principle that would assign a different value to Pr(A|2)--for the extreme case of sustaining the “anthropic shadow”, a principle that would assign Pr(A|2)=Pr(A)=1/2--would be one in which your place among the observers is generated by
first randomizing among times k (say, assigning k=1 and k=2 equal probability);
then over worlds with an observer alive at k, maintaining your prior of Pr(A)=1/2;
[and then perhaps over observers at that time, but in this example there is only one].
This is more in the spirit of SSA than SIA, but it is not SSA, and I don’t think anyone endorses it. SSA randomizes over worlds and then over observers within each world, so that observing that you’re late in time is indeed evidence that “most worlds last late”.
Thank you, this is helpful.
Ok great! And sorry the numbers in my example got unwieldy, I just picked some probabilities at the beginning and ran with them, instead of bothering to reverse-engineer something cleaner…
But this example relies on there being just one planet. If there are >1 planets, each with two periods, we are back to having an anthropic shadow.
Let’s consider the case with 2 planets. Let’s call them x and y.
According to SSA:
Given A, there are 4 different possibilities, each with probability 1/4:
No catastrophe on either planet.
Catastrophe on x.
Catastrophe on y.
Catastrophe on both.
Let’s say you observe yourself to be alive at time-step 2 on planet x.
Pr(x2|A) = 1/4*1/4 + 1/4*0 + 1/4*1/3 + 1/4*0 ~= 0.146
Given B, the probabilities are instead:
No catastrophe on either planet: (9/10)^2
Catastrophe on x: 9/10*1/10
Catastrophe on y: 1/10*9/10
Catastrophe on both: 1/10*1/10
Pr(x2|B) = (9/10)^2*1/4 + 9/10*1/10*0 + 1/10*9/10*1/3 + 1/10*1/10*0 ~= 0.233
Pr(A|x2) = Pr(x2|A)Pr(A)/Pr(x2) = Pr(x2|A)Pr(A)/[Pr(x2|A)*0.5 + Pr(x2|B)*0.5] ~= 0.146*0.5/[0.146*0.5 + 0.233*0.5] ~= 0.385.
According to SIA:
Here, we can directly compute Pr(A|x2).
All x2 observers are:
A is true and no catastrophe happens. Probability: 0.5*1/4
A is true and there’s only a catastrophe on y. Probability: 0.5*1/4
B is true and no catastrophe happens. Probability: 0.5*(9/10)^2
B is true and there’s only a catastrophe on y. Probability: 0.5*9/10*1/10
The total sum of x2 measure in worlds where A is true is 0.5*1/4 + 0.5*1/4 = 0.25.
The total sum of x2 measure is 0.5*1/4 + 0.5*1/4 + 0.5*(9/10)^2 + 0.5*9/10*1/10 = 0.7.
Pr(A|x2) = 0.25/0.7 ~= 0.357.
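Both calculations can be checked with a short Python sketch (exact fractions; the equal 0.5 priors on A and B cancel in both ratios, so they’re omitted):

```python
from fractions import Fraction as F

def credence_in_A_at_x2(rA=F(1, 2), rB=F(1, 10)):
    # Per-planet catastrophe risk r; observers are x1, y1, plus x2/y2 when the
    # corresponding planet survives to period 2. Condition on being observer x2.
    weights = {}
    for fam, r in (('A', rA), ('B', rB)):
        ssa_w = sia_w = F(0)
        for sx in (True, False):          # does x survive to period 2?
            for sy in (True, False):      # does y survive to period 2?
                p = (1 - r if sx else r) * (1 - r if sy else r)
                n_obs = 2 + sx + sy       # x1, y1, plus any survivors
                if sx:                    # observer x2 exists in this world
                    ssa_w += p * F(1, n_obs)  # SSA: uniform within the world
                    sia_w += p                # SIA: proportional to world prior
        weights[fam] = (ssa_w, sia_w)
    (ssaA, siaA), (ssaB, siaB) = weights['A'], weights['B']
    return ssaA / (ssaA + ssaB), siaA / (siaA + siaB)

ssa_post, sia_post = credence_in_A_at_x2()
print(float(ssa_post), float(sia_post))  # ~0.385 (SSA) vs ~0.357 = 5/14 (SIA)
```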
The difference would be somewhat larger with >2 planets. (But it would never be very large, unless you changed the SSA reference class so that you’re e.g. only counting observers at period 2.)
Also: The mechanism of action here is the correlation between there being a survivor alive at x2 and there being a greater number of total observers in your reference class. There are multiple ways to break this:
If you have a universe with both A planets and B planets (i.e. each planet has a 50% probability of being an A planet and a 50% probability of being a B planet) then there will once again not be any difference between SIA and SSA. (Because then there’s no correlation between x2 and the total number of observers.)
Alternatively, if there’s a sufficiently large “background population” of people in your reference class whose size is equally large regardless of whether there’s a survivor at x2, then the correlation between x2 and the total number of observers can become arbitrarily small, and so the difference between SIA and SSA can become arbitrarily small.
Overall: I don’t think SSA-style anthropic shadows of any significant size are real, because I think SSA is unreasonable (and SSA with small/restrictive reference classes especially so), and because with large reference classes it seems unlikely to me that there are large correlations between our possible historic demise and the total number of observers (for reasons like the above two bullet points).
Oh, also, re the original paper, I do think that even given SSA, Teru’s argument that Jack and Jill have equivalent epistemic perspectives is correct. (Importantly: as long as Jack and Jill use the same SSA reference classes, and those reference classes don’t treat Jack and Jill any differently.)
Since the core mechanism in my above comment is the correlation between x2 and the total number of observers, I think Jill the Martian would also arrive at different Pr(A) depending on whether she was using SSA or SIA.
(But Teru doesn’t need to get into any of this, because he effectively rejects SSA towards the end of the section “Barking Dog vs The Martians” (p12-14 of the pdf), referring to his previous paper “Doomsday and objective chances”.)
Interesting, thanks for pointing this out! And just to note, that result doesn’t rely on any sort of suspicious knowledge about whether you’re on the planet labeled “x” or “y”; one could also just say “given that you observe that you’re in period 2, …”.
I don’t think it’s right to describe what’s going on here as anthropic shadow though, for the following reason. Let me know what you think.
To make the math easier, let me do what perhaps I should have done from the beginning and have A be the event that the risk is 50% and B be the event that it’s 0%. So in the one-planet case, there are 3 possible worlds:
A1 (prior probability 25%) -- risk is 50%, lasts one period
A2 (prior probability 25%) -- risk is 50%, lasts two periods
B (prior probability 50%) -- risk is 0%, lasts two periods
At time 1, whereas SIA tells us to put credence of 1⁄2 on A, SSA tells us to put something higher--
(0.25 + 0.25/2) / (0.25 + 0.25/2 + 0.5/2) = 3⁄5
--because a higher fraction of expected observers are at period 1 given A than given B. This is the Doomsday Argument. When we reach period 2, both SSA and SIA tell us to update our credence in A downward. Both update for the same reason we would update downward on the probability of an event that didn’t change the number of observers: e.g., if A is the event that you live in a place where the probability of rain per day is 50% and B is the event that it’s 0%, you start out putting credence 50% [or 60%] on A, and you make it to day 2 without rain (and would have lived to see day 2 either way). But in the catastrophe case SSA has you update further downward, because the Doomsday Argument stops applying in period 2.
One way to put the general lesson is that, as time goes on and you learn how many observers there are, SSA has less room to shift probability mass (relative to SIA) toward the worlds where there are fewer observers.
In the case above, once you make it to period 2, that uncertainty is fully resolved: given A or B, you know you’re in a world with 2 observers. This is enough to motivate such a big update according to SSA that at the end the two principles agree on assigning probability 1⁄3 to A.
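Both numbers (the period-1 SSA credence of 3⁄5 and the period-2 agreement at 1⁄3) can be checked with a quick Python sketch of the simplified one-planet setup:

```python
from fractions import Fraction as F

# (prior, lifetime in periods); one observer per period.
worlds = [(F(1, 4), 1),   # A1: risk 50%, lasts one period
          (F(1, 4), 2),   # A2: risk 50%, lasts two periods
          (F(1, 2), 2)]   # B:  risk 0%,  lasts two periods
is_A = [True, True, False]

def ssa_credence_in_A(k):
    # SSA weight of being observer #k in world w: prior(w) / lifetime(w),
    # counting only worlds that last at least k periods.
    wA = sum(p * F(1, n) for (p, n), a in zip(worlds, is_A) if a and n >= k)
    wB = sum(p * F(1, n) for (p, n), a in zip(worlds, is_A) if not a and n >= k)
    return wA / (wA + wB)

print(ssa_credence_in_A(1), ssa_credence_in_A(2))  # 3/5, then 1/3
```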
In cases where uncertainty about the number of observers is only partially resolved in the move from period 1 to period 2--as in my 3-period example, or in your 2-planet example*--then the principles sustain some disagreement in period 2. This is because
SSA started out in period 1 assigning a higher credence to A than SIA;
both recommend updating on the evidence given by survival as you would update on anything else, like lack of rain;
SSA further updates downward because the Doomsday Argument partially loses force; and
the result is that SSA still assigns a higher credence to A than SIA.
*To verify the Doomsday-driven disagreement in period 1 in the two-planet case explicitly (with the simpler definitions of A and B), there are 5 possible worlds:
A1 (prior probability 12.5%) -- risk is 50% per planet, both last one period
A2 (prior probability 12.5%) -- risk is 50% per planet, only x lasts two periods
A3 (prior probability 12.5%) -- risk is 50% per planet, only y lasts two periods
A4 (prior probability 12.5%) -- risk is 50% per planet, both last two periods
B (prior probability 50%) -- risk is 0% per planet, both last two periods
In period 1, SIA gives credence in A of 1⁄2; SSA gives (0.125 + 0.125*2/3 + 0.125*2/3 + 0.125/2) / (0.125 + 0.125*2/3 + 0.125*2/3 + 0.125/2 + 0.5/2) = 17⁄29 ≈ 0.59.
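Evaluating that SSA expression exactly in Python, as a sanity check:

```python
from fractions import Fraction as F

# Five worlds: (prior, observers alive at period 1, total observers).
worlds = {'A1': (F(1, 8), 2, 2), 'A2': (F(1, 8), 2, 3), 'A3': (F(1, 8), 2, 3),
          'A4': (F(1, 8), 2, 4), 'B':  (F(1, 2), 2, 4)}

# SSA weight of being a period-1 observer in world w:
# prior(w) * (period-1 observers / total observers).
w = {name: p * F(k1, n) for name, (p, k1, n) in worlds.items()}
ssa_A_period1 = sum(v for name, v in w.items() if name.startswith('A')) / sum(w.values())
print(ssa_A_period1)  # 17/29, about 0.586
```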
One could use the term “anthropic shadow” to refer to the following fact: As time goes on, in addition to inferring existential risks are unlikely as we would infer that rain is unlikely, SSA further recommends inferring that existential risks are unlikely by giving up the claim that we’re more likely to be in a world with fewer observers; but this second update is attenuated by the (possible) existence of other planets. I don’t have any objection to using the term that way and I do think it’s an interesting point. But I think the old arguments cited in defense of an “anthropic shadow” effect were pretty clearly arguing for the view that we should update less (or even not at all) toward thinking existential risk per unit time is low as time goes on than we would update about the probabilities per unit time of other non-observed events.
Nice, I feel compelled by this.
The main question that remains for me (only parenthetically alluded to in my above comment) is:
Do we get something that deserves to be called an “anthropic shadow” for any particular, more narrow choice of “reference class”, and...
can the original proposers of an “anthropic shadow” be read as proposing that we should work with such reference classes?
I think the answer to the first question is probably “yes” if we look at a reference class that changes over time, something like R_t = “people alive at period t of development in young civilizations’ history”.
I don’t know about the answer to the second question. I think R_t seems like kind of a wild reference class to work with, but I never really understood how reference classes were supposed to be chosen for SSA, so I don’t know what SSA’s proponents think is reasonable vs. not.
From some brief searching/skimming of the anthropic shadow paper… I don’t think they discuss the topic in enough depth to be said to have argued for such a reference class, and it seems like a pretty wild reference class to just assume. (They never mention the term “reference class” or any anthropic principle like SSA.)
Ok great!
And ok, I agree that the answer to the first question is probably “yes”, so maybe what I was calling an alternative anthropic principle in my original comment could be framed as SSA with this directly time-centric reference class. If so, instead of saying “that’s not SSA”, I should have said “that’s not SSA with a standard reference class (or a reference class anyone seems to have argued for)”. I agree that Bostrom et al. (2010) don’t seem to argue for such a reference class.
On my reading (and Teru’s, not coincidentally), the core insight Bostrom et al. have (and iterate on) is equivalent to the insight that if you haven’t observed something before, and you assign it a probability per unit of time equal to its past frequency, then you must be underestimating its probability per unit of time. The response isn’t that this is predicated on, or arguing for, any weird view on anthropics, but just that it has nothing to do with anthropics: it’s true, but for the same reason that you’ll underestimate the probability of rain per unit time based on past frequency if it’s never rained (though in the prose they convey their impression that the fact that you wouldn’t exist in the event of a catastrophe is what’s driving the insight). The right thing to do in both cases is to have a prior and update the probability downward as the dry spell lengthens. A nonstandard anthropic principle (or reference class) is just what would be necessary to motivate a fundamental difference from “no rain”.
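The “have a prior and update downward as the dry spell lengthens” point is just ordinary Bayesian updating, and the arithmetic is identical whether the unobserved event is rain or a catastrophe. A minimal sketch (the 1⁄2-vs-1⁄10 risk values here are just illustrative):

```python
from fractions import Fraction as F

def posterior_high_risk(t, p_hi=F(1, 2), p_lo=F(1, 10)):
    # Posterior that the per-period probability is p_hi rather than p_lo,
    # starting from a 50/50 prior, after t periods with no event.
    like_hi, like_lo = (1 - p_hi) ** t, (1 - p_lo) ** t
    return like_hi / (like_hi + like_lo)

# The past-frequency estimate of the risk is 0 after any dry spell; the
# Bayesian posterior instead shrinks toward the low-risk hypothesis gradually.
for t in range(4):
    print(t, float(posterior_high_risk(t)))
```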