This is a linkpost for Dispelling the Anthropic Shadow by Teruji Thomas.
Abstract:
There are some possible events that we could not possibly discover in our past. We could not discover an omnicidal catastrophe, an event so destructive that it permanently wiped out life on Earth. Had such a catastrophe occurred, we wouldn’t be here to find out. This space of unobservable histories has been called the anthropic shadow. Several authors claim that the anthropic shadow leads to an ‘observation selection bias’, analogous to survivorship bias, when we use the historical record to estimate catastrophic risks. I argue against this claim.
Upon a first read, I found this paper pretty persuasive; I’m at >80% that I’ll later agree with it entirely, i.e. I’d agree that “the anthropic shadow effect” is not a real thing and earlier arguments in favor of it being a real thing were fatally flawed. This was a significant update for me on the issue.
Anthropic shadow effects are one of the topics discussed loosely in social settings among EAs (and in general open-minded nerdy people), often in a way that assumes the validity of the concept[1]. To the extent that the concept turns out to be completely not a thing — and for conceptual rather than empirical reasons — I’d find that an interesting sociological/cultural fact.
I also found this remarkably clear and definitive—a real update for me, to the point of coming with some actual relief! I’m afraid I wasn’t aware of the existing posts by Toby Crisford and Jessica Taylor.
I suppose if there’s a sociological fact here it’s that EAs and people who are nerdy in similar sorts of ways, myself absolutely included, can be quick to assume a position is true because it sounds reasonable and because seemingly thoughtful other people who have thought about the question more have endorsed it. I don’t think this single-handedly demonstrates we’re too quick; not everyone can dig into everything, so at least to some extent it makes sense to specialize and defer despite the fact that this is bound to happen now and then.
Of course argument-checking is also something one can specialize in, and one thing about the EA community which I think is uncommon and great is hiring people like Teru to dig into its cultural background assumptions like this...
I’m having trouble understanding this. The part that comes closest to making sense to me is this summary:
Are they just applying https://en.wikipedia.org/wiki/Self-indication_assumption_doomsday_argument_rebuttal to anthropic shadow without using any of the relevant terms, or is it something else I can’t quite get?
Also, how would they respond to the fine-tuning argument? That is, it seems like most planets (let’s say 99.9%) cannot support life (eg because they’re too close to their sun). It seems fantastically surprising that we find ourselves on a planet that does support life, but anthropics provides an easy way out of this apparent coincidence. That is, anthropics tells us that we overestimate the frequency of things that allow us to be alive. This seems like reverse anthropic shadow, where anthropic shadow is underestimating the frequency of things that cause us to be dead. So is the paper claiming that anthropics does change our estimates of the frequency of good things, but can’t change our estimate of the frequency of bad things? Why would this be?
To answer the first question, no, the argument doesn’t rely on SIA. Let me know if the following is helpful.
Suppose your prior (perhaps after studying plate tectonics and so on, but not after considering the length of time that’s passed without an extinction-inducing supervolcano) is that there’s probability “P(A)”=0.5 that the risk of an extinction-inducing supervolcano at the end of each year is 1⁄2 and probability “P(B)”=0.5 that the risk is 1⁄10. Suppose that the world lasts at least 1 year and at most 3 years regardless.
Let “A1” be the possible world in which the risk was 1⁄2 per year and we blew up at the end of year 1, “A2” be that in which the risk was 1⁄2 per year and we blew up at the end of year 2, and “A3” be that in which the risk was 1⁄2 per year and we never blew up, so that we got to exist for 3 years. Define B1, B2, B3 likewise for the risk=1/10 worlds.
Suppose there’s one “observer per year” before the extinction event and zero after, and let “Cnk”, with k<=n, be observer #k in world Cn (C can be A or B). So there are 12 possible observers: A11, A21, A22, A31, A32, A33, and likewise for the Bs.
If you are observer Cnk, your evidence is that you are observer #k. The question is what Pr(A|k) is; what probability you should assign to the annual risk being 1⁄2 given your evidence.
Any Bayesian, whether following SIA or SSA (or anything else), agrees that
Pr(A|k) = Pr(k|A)Pr(A)/Pr(k),
where Pr(.) is the credence an observer should have for an event according to a given anthropic principle. The anthropic principles disagree about the values of these credences, but here the disagreements cancel out. Note that we do not necessarily have Pr(A)=P(A): in particular, if the prior P(.) assigns equal probability to two worlds, SIA will recommend assigning higher credence Pr(.) to the one with more observers, e.g. by giving an answer of Pr(coin landed heads) = 1⁄3 in the sleeping beauty problem, where on this notation P(coin landed heads) = 1⁄2.
On SSA, your place among the observers is in effect generated first by randomizing among the worlds according to your prior and then by randomizing among the observers in the chosen world. So Pr(A)=0.5, and
Pr(1|A) = 1⁄2 + 1/4*1/2 + 1/4*1/3 = 17⁄24
(since Pr(n=1|A)=1/2, in which case k=1 for sure; Pr(n=2|A)=1/4, in which case k=1 with probability 1⁄2; and Pr(n=3|A)=1/4, in which case k=1 with probability 1⁄3);
Pr(2|A) = 1/4*1/2 + 1/4*1/3 = 5⁄24; and
Pr(3|A) = 1/4*1/3 = 2⁄24.
For simplicity we can focus on the k=2 case, since that’s the case analogous to people like us, in the middle of an extended history. Going through the same calculation for the B worlds gives Pr(2|B) = 63⁄200, so Pr(2) = 0.5*5/24 + 0.5*63/200 = 157⁄600.
So Pr(A|2) = 125⁄314 ≈ 0.4.
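The SSA calculation above can be checked in a few lines of Python with exact fractions (a sketch; the helper names are my own):

```python
from fractions import Fraction as F

def duration_probs(risk, max_years=3):
    """Pr(world lasts exactly n years | annual extinction risk)."""
    probs = {n: (1 - risk) ** (n - 1) * risk for n in range(1, max_years)}
    probs[max_years] = (1 - risk) ** (max_years - 1)  # survived throughout
    return probs

def ssa_pr_k(risk, k):
    # SSA: randomize over durations n, then uniformly over the n observers
    return sum(p / n for n, p in duration_probs(risk).items() if k <= n)

pA, pB = ssa_pr_k(F(1, 2), 2), ssa_pr_k(F(1, 10), 2)
print(pA, pB)          # 5/24 63/200
print(pA / (pA + pB))  # Pr(A|2): the 50/50 prior cancels, giving 125/314 (~0.4)
```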
On SIA, your place among the observers is generated by randomizing among the observers, giving proportionally more weight to observers in worlds with proportionally higher prior probability, so that the probability of being observer Cnk is
1/12*P(Cn) / [sum over possible observers, labeled “Dmj”, of (1/12*P(Dm))],
This works out to Pr(2|A) = 2⁄7 [6 possible observers given A, but the one in the n=1 world “counts for double” since that world is twice as likely a priori as the n=2 or n=3 worlds];
Pr(A) = 175⁄446 [less than 1⁄2 since there are fewer observers in expectation when the risk of early extinction is higher], and
Pr(2) = 140⁄446, so
Pr(A|2) = 5⁄14 ≈ 0.36.
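The SIA numbers can be checked the same way, weighting each possible observer by its world’s prior probability (a sketch, with my own variable names):

```python
from fractions import Fraction as F

def worlds(risk, prior, max_years=3):
    """(prior probability of world, number of observers) per duration."""
    out = [(prior * (1 - risk) ** (n - 1) * risk, n) for n in range(1, max_years)]
    out.append((prior * (1 - risk) ** (max_years - 1), max_years))
    return out

A, B = worlds(F(1, 2), F(1, 2)), worlds(F(1, 10), F(1, 2))
total = sum(p * n for p, n in A + B)  # each observer weighted by world prior

pr_A = sum(p * n for p, n in A) / total            # 175/446
pr_2 = sum(p for p, n in A + B if n >= 2) / total  # 70/223 (= 140/446)
pr_A_2 = sum(p for p, n in A if n >= 2) / sum(p for p, n in A + B if n >= 2)
print(pr_A, pr_2, pr_A_2)  # 175/446 70/223 5/14
```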
So in both cases you update on the fact that a supervolcano did not occur at the end of year 1, from assigning probability 0.5 to the event that the underlying risk is 1⁄2 to assigning some lower probability to this event.
But I said that the disagreements canceled out, and here it seems that they don’t cancel out! This is because the anthropic principles disagree about Pr(A|2) for a reason other than the evidence provided by the lack of a supervolcano at the end of year 1: namely the possible existence of year 3. How to update on the fact that you’re in year 2 when you “could have been” in year 3 gets into doomsday argument issues, which the principles do disagree on. I included year 3 in the example because I worried it might seem fishy to make the example all about a 2-period setting where, in period 2, the question is just “what was the underlying probability we would make it here”, with no bearing on what probability we should assign to making it to the next period. But since this is really the example that isolates the anthropic shadow consideration, observe that if we simplify things so that the world lasts at most 2 years (and there are 6 possible observers), SSA gives
Pr(2|A) = 1⁄4, Pr(A) = 1⁄2, Pr(2) = 7⁄20 → Pr(A|2) = 5⁄14,
and SIA gives
Pr(2|A) = 1⁄3, Pr(A) = 15⁄34, Pr(2) = 14⁄34 → Pr(A|2) = 5⁄14.
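Both of these 2-year results can be verified in a few lines (a sketch; the lists encode the (duration probability, observer count) pairs described above):

```python
from fractions import Fraction as F

# (Pr(duration | hypothesis), number of observers); world lasts 1 or 2 years
A = [(F(1, 2), 1), (F(1, 2), 2)]    # annual risk 1/2
B = [(F(1, 10), 1), (F(9, 10), 2)]  # annual risk 1/10
prior = F(1, 2)

# SSA: Pr(k=2 | hypothesis) averages 1/n over the durations with n >= 2
ssa2 = lambda H: sum(q / n for q, n in H if n >= 2)
ssa_post = ssa2(A) * prior / (ssa2(A) * prior + ssa2(B) * prior)

# SIA: weight the period-2 observer by its world's prior probability
sia2 = lambda H: sum(prior * q for q, n in H if n >= 2)
sia_post = sia2(A) / (sia2(A) + sia2(B))

print(ssa_post, sia_post)  # 5/14 5/14
```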
____________________________
An anthropic principle that would assign a different value to Pr(A|2)--for the extreme case of sustaining the “anthropic shadow”, a principle that would assign Pr(A|2)=Pr(A)=1/2--would be one in which your place among the observers is generated by
first randomizing among times k (say, assigning k=1 and k=2 equal probability);
then over worlds with an observer alive at k, maintaining your prior of Pr(A)=1/2;
[and then perhaps over observers at that time, but in this example there is only one].
This is more in the spirit of SSA than SIA, but it is not SSA, and I don’t think anyone endorses it. SSA randomizes over worlds and then over observers within each world, so that observing that you’re late in time is indeed evidence that “most worlds last late”.
Thank you, this is helpful.
Ok great! And sorry the numbers in my example got unwieldy, I just picked some probabilities at the beginning and ran with them, instead of bothering to reverse-engineer something cleaner…
But this example relies on there just being one planet. If there are >1 planets, each with two periods, we are back to having an anthropic shadow again.
Let’s consider the case with 2 planets. Let’s call them x and y.
According to SSA:
Given A, there are 4 different possibilities, each with probability 1/4:
No catastrophe on either planet.
Catastrophe on x.
Catastrophe on y.
Catastrophe on both.
Let’s say you observe yourself to be alive at time-step 2 on planet x.
Pr(x2|A) = 1/4*1/4 + 1/4*0 + 1/4*1/3 + 1/4*0 ~= 0.146
Given B, the probabilities are instead:
No catastrophe on either planet: (9/10)^2
Catastrophe on x: 9/10*1/10
Catastrophe on y: 1/10*9/10
Catastrophe on both: 1/10*1/10
Pr(x2|B) = (9/10)^2*1/4 + 9/10*1/10*0 + 1/10*9/10*1/3 + 1/10*1/10*0 ~= 0.233
Pr(A|x2) = Pr(x2|A)Pr(A)/Pr(x2) = Pr(x2|A)Pr(A)/[Pr(x2|A)*0.5+Pr(x2|B)*0.5)] ~= 0.146*0.5/[0.146*0.5+0.233*0.5] ~= 0.385.
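A sketch of this SSA computation with exact fractions, enumerating the four catastrophe patterns (helper names are mine):

```python
from fractions import Fraction as F
from itertools import product

def pr_x2(risk):
    """SSA: Pr(you are the period-2 observer on planet x | per-planet risk),
    two planets, two periods, one observer per surviving planet-period."""
    total = F(0)
    for cat_x, cat_y in product([False, True], repeat=2):
        p = (risk if cat_x else 1 - risk) * (risk if cat_y else 1 - risk)
        n_obs = 2 + (not cat_x) + (not cat_y)  # period-1 pair plus survivors
        if not cat_x:  # x2 only exists if x had no catastrophe
            total += p * F(1, n_obs)
    return total

pA, pB = pr_x2(F(1, 2)), pr_x2(F(1, 10))
print(float(pA), float(pB))   # ~0.146, ~0.233
print(float(pA / (pA + pB)))  # Pr(A|x2) ~0.385
```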
According to SIA:
Here, we can directly compute Pr(A|x2).
All x2 observers are:
Where A is true—Where no catastrophes happen. Probability: 0.5*1/4
Where A is true—Where there’s only a catastrophe on y. Probability: 0.5*1/4
Where B is true—Where no catastrophes happen. Probability: 0.5*(9/10)^2
Where B is true—Where there’s only a catastrophe on y. Probability: 0.5*9/10*1/10
The total sum of x2 measure in worlds where A is true is 0.5*1/4 + 0.5*1/4 = 0.25.
The total sum of x2 measure is 0.5*1/4 + 0.5*1/4 + 0.5*(9/10)^2 + 0.5*9/10*1/10 = 0.7
Pr(A|x2) = 0.25/0.7 ~= 0.357.
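And the SIA version, summing the x2 measure over the same four catastrophe patterns (a sketch):

```python
from fractions import Fraction as F
from itertools import product

def x2_measure(risk, prior):
    # SIA: total probability mass of world-branches in which x2 exists
    m = F(0)
    for cat_x, cat_y in product([False, True], repeat=2):
        p = (risk if cat_x else 1 - risk) * (risk if cat_y else 1 - risk)
        if not cat_x:  # x2 exists only if x had no catastrophe
            m += prior * p
    return m

mA = x2_measure(F(1, 2), F(1, 2))   # 0.5*(1/4 + 1/4)      = 1/4
mB = x2_measure(F(1, 10), F(1, 2))  # 0.5*(81/100 + 9/100) = 9/20
print(mA / (mA + mB), float(mA / (mA + mB)))  # 5/14 ~0.357
```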
The difference would be somewhat larger with >2 planets. (But would never be very large. Unless you changed the SSA reference classes so that you’re e.g. only counting observers at period 2.)
Also: The mechanism of action here is the correlation between there being a survivor alive at x2 and there being a greater number of total observers in your reference class. There are multiple ways to break this:
If you have a universe with both A planets and B planets (i.e. each planet has a 50% probability of being an A planet and a 50% probability of being a B planet) then there will once again not be any difference between SIA and SSA. (Because then there’s no correlation between x2 and the total number of observers.)
Alternatively, if there’s a sufficiently large “background population” of people in your reference class whose size is equally large regardless of whether there’s a survivor at x2, then the correlation between x2 and the total number of observers can become arbitrarily small, and so the difference between SIA and SSA can become arbitrarily small.
Overall: I don’t think SSA-style anthropic shadows of any significant size are real. Because I think SSA is unreasonable, and because I think SSA with small/restrictive reference classes is especially unreasonable. And with large reference classes, it seems unlikely to me that there are large correlations between our possible historic demise and the total number of observers. (For reasons like the above two bullet points.)
Oh, also, re the original paper, I do think that even given SSA, Teru’s argument that Jack and Jill have equivalent epistemic perspectives is correct. (Importantly: As long as Jack and Jill use the same SSA reference classes, and those reference classes don’t treat Jack and Jill any differently.)
Since the core mechanism in my above comment is the correlation between x2 and the total number of observers, I think Jill the Martian would also arrive at different Pr(A) depending on whether she was using SSA or SIA.
(But Teru doesn’t need to get into any of this, because he effectively rejects SSA towards the end of the section “Barking Dog vs The Martians” (p12-14 of the pdf). Referring to his previous paper Doomsday and objective chances.)
Interesting, thanks for pointing this out! And just to note, that result doesn’t rely on any sort of suspicious knowledge about whether you’re on the planet labeled “x” or “y”; one could also just say “given that you observe that you’re in period 2, …”.
I don’t think it’s right to describe what’s going on here as anthropic shadow though, for the following reason. Let me know what you think.
To make the math easier, let me do what perhaps I should have done from the beginning and have A be the event that the risk is 50% and B be the event that it’s 0%. So in the one-planet case, there are 3 possible worlds:
A1 (prior probability 25%) -- risk is 50%, lasts one period
A2 (prior probability 25%) -- risk is 50%, lasts two periods
B (prior probability 50%) -- risk is 0%, lasts two periods
At time 1, whereas SIA tells us to put credence of 1⁄2 on A, SSA tells us to put something higher--
(0.25 + 0.25/2) / (0.25 + 0.25/2 + 0.5/2) = 3⁄5
--because a higher fraction of expected observers are at period 1 given A than given B. This is the Doomsday Argument. When we reach period 2, both SSA and SIA then tell us to update our credence in A downward. Both principles tell us fully to update downward for the same reasons that we would update downward on the probability of an event that didn’t change the number of observers: e.g. if A is the event you live in a place where the probability of rain per day is 50% and B is the event that it’s 0%; you start out putting credence 50% [or 60%] on A; and you make it to day 2 without rain (and would live to see day 2 either way). But in the catastrophe case SSA further has you update downward because the Doomsday Argument stops applying in period 2.
One way to put the general lesson is that, as time goes on and you learn how many observers there are, SSA has less room to shift probability mass (relative to SIA) toward the worlds where there are fewer observers.
In the case above, once you make it to period 2, that uncertainty is fully resolved: given A or B, you know you’re in a world with 2 observers. This is enough to motivate such a big update according to SSA that at the end the two principles agree on assigning probability 1⁄3 to A.
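A minimal check of these numbers for the three-world example (A1, A2, B with priors 1⁄4, 1⁄4, 1⁄2):

```python
from fractions import Fraction as F

# (prior, number of observers, is_A) for worlds A1, A2, B
worlds = [(F(1, 4), 1, True), (F(1, 4), 2, True), (F(1, 2), 2, False)]

def ssa(k):  # SSA: world prior times Pr(being observer #k in that world)
    mass = lambda only_A: sum(p / n for p, n, a in worlds
                              if n >= k and (a or not only_A))
    return mass(True) / mass(False)

def sia(k):  # SIA: observer #k weighted by its world's prior
    mass = lambda only_A: sum(p for p, n, a in worlds
                              if n >= k and (a or not only_A))
    return mass(True) / mass(False)

print(ssa(1), sia(1))  # 3/5 1/2  (the Doomsday Argument gap at period 1)
print(ssa(2), sia(2))  # 1/3 1/3  (agreement once the observer count is known)
```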
In cases where uncertainty about the number of observers is only partially resolved in the move from period 1 to period 2--as in my 3-period example, or in your 2-planet example*--then the principles sustain some disagreement in period 2. This is because
SSA started out in period 1 assigning a higher credence to A than SIA;
both recommend updating on the evidence given by survival as you would update on anything else, like lack of rain;
SSA further updates downward because the Doomsday Argument partially loses force; and
the result is that SSA still assigns a higher credence to A than SIA.
*To verify the Doomsday-driven disagreement in period 1 in the two-planet case explicitly (with the simpler definitions of A and B), there are 5 possible worlds:
A1 (prior probability 12.5%) -- risk is 50% per planet, both last one period
A2 (prior probability 12.5%) -- risk is 50% per planet, only x lasts two periods
A3 (prior probability 12.5%) -- risk is 50% per planet, only y lasts two periods
A4 (prior probability 12.5%) -- risk is 50% per planet, both last two periods
B (prior probability 50%) -- risk is 0% per planet, both last two periods
In period 1, SIA gives credence in A of 1⁄2; SSA gives (0.125 + 0.125*2/3 + 0.125*2/3 + 0.125/2) / (0.125 + 0.125*2/3 + 0.125*2/3 + 0.125/2 + 0.5/2) = 17⁄29 ≈ 0.586.
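Evaluating that SSA expression with exact fractions (a sketch; each world is (prior, total observers, whether A), and every world has 2 period-1 observers):

```python
from fractions import Fraction as F

# (prior, total observers, is_A) for worlds A1..A4 and B
worlds = [(F(1, 8), 2, True), (F(1, 8), 3, True), (F(1, 8), 3, True),
          (F(1, 8), 4, True), (F(1, 2), 4, False)]

# SSA: Pr(world & I'm at period 1) = prior * (2 / total observers)
mass = lambda only_A: sum(p * F(2, n) for p, n, a in worlds if a or not only_A)
ssa_period1_A = mass(True) / mass(False)
print(ssa_period1_A, float(ssa_period1_A))  # 17/29 ~0.586
```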
One could use the term “anthropic shadow” to refer to the following fact: As time goes on, in addition to inferring existential risks are unlikely as we would infer that rain is unlikely, SSA further recommends inferring that existential risks are unlikely by giving up the claim that we’re more likely to be in a world with fewer observers; but this second update is attenuated by the (possible) existence of other planets. I don’t have any objection to using the term that way and I do think it’s an interesting point. But I think the old arguments cited in defense of an “anthropic shadow” effect were pretty clearly arguing for the view that we should update less (or even not at all) toward thinking existential risk per unit time is low as time goes on than we would update about the probabilities per unit time of other non-observed events.
Nice, I feel compelled by this.
The main question that remains for me (only parenthetically alluded to in my above comment) is:
Do we get something that deserves to be called an “anthropic shadow” for any particular, more narrow choice of “reference class”, and...
can the original proposers of an “anthropic shadow” be read as proposing that we should work with such reference classes?
I think the answer to the first question is probably “yes” if we look at a reference class that changes over time, something like R_t = “people alive at period t of development in young civilizations’ history”.
I don’t know about the answer to the second question. I think R_t seems like kind of a wild reference class to work with, but I never really understood how reference classes were supposed to be chosen for SSA, so idk what SSA’s proponents think is reasonable vs. not.
With some brief searches/skim in the anthropic shadow paper… I don’t think they discuss the topic in enough depth that they can be said to have argued for such a reference class, and it seems like a pretty wild reference class to just assume. (They never mention either the term “reference class” or even any anthropic principles like SSA.)
Ok great!
And ok, I agree that the answer to the first question is probably “yes”, so maybe what I was calling an alternative anthropic principle in my original comment could be framed as SSA with this directly time-centric reference class. If so, instead of saying “that’s not SSA”, I should have said “that’s not SSA with a standard reference class (or a reference class anyone seems to have argued for)”. I agree that Bostrom et al. (2010) don’t seem to argue for such a reference class.
On my reading (and Teru’s, not coincidentally), the core insight Bostrom et al. have (and iterate on) is equivalent to the insight that if you haven’t observed something before, and you assign it a probability per unit of time equal to its past frequency, then you must be underestimating its probability per unit of time. The response isn’t that this is predicated on, or arguing for, any weird view on anthropics, but just that it has nothing to do with anthropics: it’s true, but for the same reason that you’ll underestimate the probability of rain per unit time based on past frequency if it’s never rained (though in the prose they convey their impression that the fact that you wouldn’t exist in the event of a catastrophe is what’s driving the insight). The right thing to do in both cases is to have a prior and update the probability downward as the dry spell lengthens. A nonstandard anthropic principle (or reference class) is just what would be necessary to motivate a fundamental difference from “no rain”.
I’m not sure I understand the second question. I would have thought both updates are in the same direction: the fact that we’ve survived on Earth a long time tells us that this is a planet hospitable to life, both in terms of its life-friendly atmosphere/etc and in terms of the rarity of supervolcanoes.
We can say, on anthropic grounds, that it would be confused to think other planets are hospitable on the basis of Earth’s long and growing track record. But as time goes on, we get more evidence that we really are on a life-friendly planet, and haven’t just had a long string of luck on a life-hostile planet.
The anthropic shadow argument was an argument along the lines, “no, we shouldn’t get ever more convinced we’re on a life-friendly planet over time (just on the evidence that we’re still around). It is actually plausible that we’ve just had a lucky streak that’s about to break—and this lack of update is in some way because no one is around to observe anything in the worlds that blow up”.
Habryka referred me to https://forum.effectivealtruism.org/posts/A47EWTS6oBKLqxBpw/against-anthropic-shadow , whose “Possible Solution 2” is what I was thinking of. It looks like anthropic shadow holds if you think there are many planets (which seems true) and you are willing to accept weird things about reference classes (which seems like the price of admissions to anthropics). I appreciate the paper you linked for helping me distinguish between the claim that anthropic shadow is transparently true without weird assumptions, vs. the weaker claim in Possible Solution 2 that it might be true with about as much weirdness as all the other anthropic paradoxes.
Eli Rose was the one who linked to it, to give credit where it’s due : )
I agree that those are different claims, but I expect the weaker claim is also not true, for whatever that’s worth. The claim in Toby Crisford’s Possible Solution 2, as I understand it, is the same as the claim I was making at the end of my long comment: that one could construct some anthropic principle according to which the anthropic shadow argument would be justified. But that principle would have to be different from SSA and SIA; I’m pretty sure it would have to be something which no one has argued for; and my guess is that on thinking about it further most people would consider any principle that fits the bill to have significantly weirder implications than either SSA or SIA.
I no longer endorse this, see reply below:
I don’t think this does away with the problem, because for decision making purposes the fact that a random event is extinction-causing or not is still relevant (thinking of the Supervolcano vs Martians case in the paper). I didn’t see this addressed in the paper. Here’s a scenario that hopefully illustrates the issue:
A game is set up where a ball will be drawn from a jar. If it comes out red then “extinction” occurs, the player loses immediately. If it comes out green then “survival” occurs, and the player continues to the next round. This is repeated (with the ball replaced every time) for an unknown number of rounds with the player unable to do anything.
Eventually, the game master decides to stop (for their own unknowable reasons), and offers the player two options:
Play one more round of drawing the ball from the jar and risking extinction if it comes out red
Take a fixed 10% chance of extinction
If they get through this round then they win the game.
The game is played in two formats:
Jack is offered the game as described above, where he can lose before getting to the decision point
Jill is offered a game where rounds before the decision point don’t count, she can observe the colour of the ball but doesn’t risk extinction. Only on the final round does she risk extinction
Let’s say they both start with a prior that P(red) is 15%, and that the actual P(red) is 20%. Should they adopt different strategies?
The answer is yes:
For Jack, he will only end up at the decision point if he observes 0 red balls. Assuming a large number of rounds are played, if he naively applies Bayesian reasoning he will conclude P(red) is very close to 0 and choose option 1 (another round of picking a ball). This is clearly irrational, because it will always result in option 1 being chosen regardless of the true probability and of his prior[1]. A better strategy is to stick with his prior if it is at all informative
For Jill, she will end up at the decision point regardless of whether she sees a red ball. Assuming a large number of practice rounds are played, in almost all worlds applying naive Bayesian reasoning will tell her P(red) is close to 20%, and she should pick option 2. In this case the decision is sensitive to the true probability, and she only loses out in the small proportion of worlds where she observes an unusually low number of red balls, so the naive Bayesian strategy seems rational
The point is that the population of Jacks that get the opportunity to make the decision is selected to be only those that receive evidence that imply a low probability, and this systematically biases the decision in a way that is predictable beforehand (such that having the information that this selection effect exists can change your optimal decision).
I think this is essentially the same objection raised by quila below, and is in the same vein as Jonas Moss’s comment on Toby’s post (I’m not 100% sure of this, I’m more confident that the above objection is basically right than that it’s the same as these two others).
It’s quite possible I’m missing something in the paper, since I didn’t read it in that much detail and other people seem convinced by it. But I didn’t see anything that would make a difference for this basic case of an error in decision making being caused by the anthropic shadow (and particularly I didn’t see how observing a larger number of rounds makes a difference).
A way to see that this is common-sense irrational is to suppose it’s a coin flip instead of a ball being drawn, where it’s very hard to imagine how you could physically bias a coin to 99% heads, so you would have a very strong prior against that. In this case if you saw 30 heads in a row (and you could see that it wasn’t a two-headed coin) it would still seem stupid to take the risk of getting tails on the next round
Under typical decision theory, your decisions are a product of your beliefs and the utilities that you assign to different outcomes. In order to argue that Jack and Jill ought to be making different decisions here, it seems that you must either:
Dispute the paper’s claim that Jack and Jill ought to assign the same probabilities in the above type of situations.
Be arguing that Jack and Jill ought to be making their decisions differently despite having identical preferences about the next round and identical beliefs about the likelihood that a ball will turn out to be red.
Are you advancing one of these claims? If (1), I think you’re directly disagreeing with the paper for reasons that don’t just come down to how to approach decision making. If (2), maybe say more about why you propose Jack and Jill make different decisions despite having identical beliefs and preferences?
I thought about it more, and I am now convinced that the paper is right (at least in the specific example I proposed).
The thing I didn’t get at first is that given a certain prior over P(extinction), and a number of iterations survived, there are “more surviving worlds” where the actual P(extinction) is low relative to your initial prior, and that this is exactly accounted for by the Bayes factor.
I also wrote a script that simulates the example I proposed, and am convinced that the naive Bayes approach does in fact give the best strategy in Jack’s case too (I haven’t proved that there isn’t a counterexample, but was convinced by fiddling with the parameters around the boundary of cases where always-option-1 dominates vs always-option-2).
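For anyone curious, a minimal version of such a simulation might look like the following. The Beta(1.5, 8.5) prior (mean 0.15) and drawing each world’s true P(red) from that prior are assumptions of mine, chosen to match the resolution above: surviving worlds skew toward low risk in exactly the way the Bayes factor accounts for.

```python
import random

ROUNDS, TRIALS = 20, 50_000
ALPHA, BETA = 1.5, 8.5  # assumed Beta prior on P(red), mean 0.15

def win_rate(strategy):
    """Fraction of all games won: survive every round, then the final choice.
    'bayes' estimates P(red) from the posterior after 0 red draws;
    'prior' sticks with the prior mean, ignoring the observed rounds."""
    rng = random.Random(1)  # identical worlds for both strategies
    wins = 0
    for _ in range(TRIALS):
        p = rng.betavariate(ALPHA, BETA)  # this world's true P(red)
        if any(rng.random() < p for _ in range(ROUNDS)):
            continue  # drew a red ball: extinct before the decision point
        est = (ALPHA / (ALPHA + BETA + ROUNDS) if strategy == "bayes"
               else ALPHA / (ALPHA + BETA))
        final_risk = p if est < 0.10 else 0.10  # option 1 iff estimate < 10%
        wins += rng.random() >= final_risk
    return wins / TRIALS

print(win_rate("bayes"), win_rate("prior"))  # updating on survival wins more
```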
Thanks, this has actually updated me a lot :)
Is this article different from these other existing critiques of Anthropic shadow arguments?
https://forum.effectivealtruism.org/posts/A47EWTS6oBKLqxBpw/against-anthropic-shadow
https://www.lesswrong.com/posts/EScmxJAHeJY5cjzAj/ssa-rejects-anthropic-shadow-too
They both are a lot clearer to me (in particular, they both make their assumptions about anthropic reasoning explicit), though that might just be a preference for the LessWrong/EA Forum style instead of academic philosophy.
I haven’t digested the full paper yet, but based on the summary pasted below, this is precisely the claim I was trying to argue for in the “Against Anthropic Shadow” post of mine that you have linked.
It looks like this claim has been fleshed out in a lot more detail here though, and I’m looking forward to reading it properly!
In the post you linked I also went on quite a long digression trying to figure out if it was possible to rescue Anthropic Shadow by appealing to the fact that there might be large numbers of other worlds containing life (this plausibly weakens the strength of evidence provided by A, which may then stop the cancellation in C). I decided it technically was possible, but only if you take a strange approach to anthropic reasoning, with a strange and difficult-to-define observer reference class.
Possibly focusing so much on this digression was a mistake though, since the summary above is really pointing to the important flaw in the original argument!
I can’t believe Toby’s initial post only had 28 Karma, it’s excellent, crazytown....
I think a proper account of this wants to explain why there appear to be arguments which argue for an anthropic shadow effect, and why there appear to be arguments which argue against an anthropic shadow effect, and how to reconcile them.
In my view, Teru Thomas’s paper is the first piece which succeeds in doing that.
(My historical position is like “I always found anthropic shadow arguments fishy, but didn’t bottom that concern out”. I found Toby Crisford’s post helpful in highlighting what might be a reason not to expect anthropic shadow effects, but it left things feeling gnarly so I wasn’t confident in it—again, without investing a great deal of time in trying to straighten it out. I missed Jessica Taylor’s post, but looking at it now I think I would have felt similarly to Toby Crisford’s analysis.)
FWIW, I think it’s rarely a good idea to assume the validity of anything where anthropics plays an important role. Or decision theory (c.f. this). These are very much not settled areas.
This sometimes even applies when it’s not obvious that anthropics is being invoked. I think Dissolving the Fermi Paradox and Grabby aliens both rely on pretty strong assumption about anthropics that are easy for readers to miss. (Tristan Cook does a good job of making the anthropics explicit, and exploring a wide range, in this post.)
summary of my comment: in anthropics problems where {what your meaning of ‘probability’ yields} is unclear, instead of focusing on ‘what the probability is’, focus on fully comprehending the situation and then derive what action you prefer to take.[1]
imv this is making a common mistake (“conflating logical and indexical uncertainty”). here’s what i think the correct reasoning is. it can be done without ever updating our probability about what the rate is.
i write two versions of this for two cases:
case 1, where something like many-worlds is true, in which case as long as the rate of omnideath events is below ~100%, there will always be observers like you.
case 2, where there is only one small world, in which case if the rate of an omnideath event occurring at least once were high there might be ‘no observers’ or ‘no instantiated copies of you’.
i try to show these are actually symmetrical.
finally i show symmetry to a case 3, where our priors are 50⁄50 between two worlds, in one of which we certainly (rather than probably) would not exist; we do not need to update our priors (in response to our existence), even in that case, to choose what to do.
case 1: where something like many-worlds is true.
we’re uncertain about rate of omnideath events.
simplifying premise: the rate is discretely either high or very low, and we start off with 50% credence on each
we’re instantiated ≥1 times in either case, so our existence does not have any probability of ruling either case out.
we could act as if the rate is low, or act as if it is high. these have different ramifications:
if we act as if the rate is low, this has expected value equal to: prior(low rate) × {value-of-action per world, conditional on low rate} × {number of worlds we’re in, conditional on low rate}
if we act as if the rate is high, this has expected value equal to: prior(high rate) × {value-of-action per world, conditional on high rate} × {number of worlds we’re in, conditional on high rate}
if we assume similar conditional ‘value-of-action per world’ in each, then acting as if the rate is low has higher EV (on those 50/50 priors), because the number of worlds we’re in is much higher if the rate of omnideath events is low.
note that there is no step here where the probability is updated away from 50⁄50.
probability is the name we give to a variable in an algorithm, and the algorithm above goes through all the relevant steps without using that term. to focus on ‘what the probability becomes’ is just a question of definitions: for instance, if defined as {more copies of me in a conditional = (by definition) ‘higher probability’ assigned to that conditional}, then that sense of probability would ‘assign more to a low rate’ (a restatement of the EV comparison above).
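the case-1 comparison can be sketched numerically. to be clear, every specific number below (the prior, the per-world value, the world counts) is an illustrative assumption of mine, just to make the structure concrete:

```python
# toy sketch of the case-1 EV comparison (many-worlds).
# all specific numbers are illustrative assumptions, not from the comment.

prior = {"low": 0.5, "high": 0.5}      # 50/50 prior over the omnideath rate
value_per_world = 1.0                  # assume similar value-of-action per world
n_worlds = {"low": 1000, "high": 10}   # far more surviving worlds if the rate is low

def ev_acting_as_if(rate):
    # prior(rate) x value-of-action per world x number of worlds we're in
    return prior[rate] * value_per_world * n_worlds[rate]

print(ev_acting_as_if("low"))   # 500.0
print(ev_acting_as_if("high"))  # 5.0
# 'low' wins, and no step updated the 50/50 prior
```

note that the comparison is driven entirely by `n_worlds`; the prior entries stay at 0.5 throughout.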
onto case 2: single world
(i expect the symmetry to be unintuitive. i imagine a reader having had a thought process like this:
“if i imagine a counterfactual where the likelihood of omnideath events per some unit of time is high, then because [in this case 2] there is only one world, it could be the case that my conditionals look like this: ‘there is one copy of me if the rate is low, but none at all if the rate is high’. that would be probabilistic evidence against the possibility it’s high, right? because i’m conditioning on my existence, which i know for sure occurs at least once already.” (and i see this indicated in another commenter’s remark, “I also went on quite a long digression trying to figure out if it was possible to rescue Anthropic Shadow by appealing to the fact that there might be large numbers of other worlds containing life”)
i don’t know if i can make the symmetry intuitive, so i may have to abandon some readers here.)
prior: uncertain about rate of omnideath events.
(same simplifying premise)
i’m instantiated exactly once.
it’s statistically more probable that i’m instantiated exactly once if the rate is low.
though, a world with a low rate of omnideath events could still have one happen, in which case i wouldn’t exist conditional on that world.
it’s statistically more probable that i’m instantiated exactly zero times if the rate is high.
though, a world with a high rate of omnideath events could still have one not happen, in which case i would exist conditional on that world.
(again then going right to the EV calculation):
we could act as if the rate is low, or act as if it is high. these have different ramifications:
if we act as if the rate is low, this has expected value equal to: prior(low rate) × {value-of-action in the sole world, conditional on low rate} × {statistical probability we are alive, conditional on low rate}
if we act as if the rate is high, this has expected value equal to: prior(high rate) × {value-of-action in the sole world, conditional on high rate} × {statistical probability we are alive, conditional on high rate}
(these probabilities are conditional on either rate already being true, i.e. p(alive|some-rate), but we’re not updating the probabilities of either rate themselves. leaving those priors unchanged would violate bayes’ formula if we conditioned on ‘alive’, so we’re intentionally not conditioning on that here.
in a sense we’re avoiding it to rescue the symmetry of these three cases. i suspect {not conditioning on one’s own existence, rather preferring to act as if one does exist per EV calc} could help with other anthropics problems too.)
if we assume similar conditional ‘value-of-action’ in each sole world, then acting as if the rate is low has higher EV (on those 50/50 priors), because ‘statistical probability we are alive’ is much higher if the rate of omnideath events is low.
(i notice i could use the same sentences with only a few changes from case 1, so maybe this symmetry will be intuitive)
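the case-2 comparison can be sketched the same way. again, the specific numbers are my own illustrative assumptions; `p_alive` stands for the ‘statistical probability we are alive’ conditional on each rate, not for an update to the rates themselves:

```python
# toy sketch of the case-2 EV comparison (single world).
# all specific numbers are illustrative assumptions, not from the comment.

prior = {"low": 0.5, "high": 0.5}      # 50/50 prior over the omnideath rate
value_in_world = 1.0                   # assume similar value-of-action in the sole world
p_alive = {"low": 0.99, "high": 0.01}  # p(alive | rate); the prior is never conditioned on 'alive'

def ev_acting_as_if(rate):
    # prior(rate) x value-of-action x statistical probability we are alive
    return prior[rate] * value_in_world * p_alive[rate]

print(ev_acting_as_if("low"))   # 0.495
print(ev_acting_as_if("high"))  # 0.005
# 'low' wins again, with the same structure as case 1 and no update to the prior
```

the only change from the case-1 sketch is swapping the world count for `p_alive`, which is the symmetry being claimed.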
(to be clear, in reality our prior might not be 50/50 (/fully uncertain); e.g., maybe we’re looking at the equations of physics and we see that it looks like there should be chain reactions which destroy the universe happening frequently, but which are never 100% likely; or maybe we instead see that they suggest world-destroying chain reactions are impossible.)
main takeaway: in anthropics problems where {what your notion of ‘probability’ yields} is unclear, instead of focusing on ‘what the probability is’, focus on fully comprehending the situation and then derive what action you prefer to take.
(case 3) i’ll close with a further, more fundamental-feeling insight: this symmetry holds even when we consider a rate of 100% rather than just ‘high’. as in, you can run that through this same structure and it will give you the correct output.
the human-worded version of that looks like: “i am a fixed mathematical function. if there are no copies of me in the world[2], then the output of this very function is irrelevant to what happens in the world. if there are copies of me in the world, then it is relevant / directly corresponds to what effects those copies have. therefore, because i am a mathematical function which cares about what happens in worlds (as opposed to what the very function itself does), i will choose my outputs as if there are copies of me in the world, even though there is a 50% chance that there are none.”
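a minimal sketch of that case-3 reasoning, with an assumed 50/50 prior and assumed payoffs: the ‘not_exist’ branch contributes zero to every action’s EV, so the best output is chosen as if copies exist, with no update to the prior:

```python
# case-3 sketch: same EV structure, but one branch where we certainly don't exist.
# the 50/50 prior and the payoffs are illustrative assumptions.

prior = {"exist": 0.5, "not_exist": 0.5}
payoff_if_exist = {"act_as_if_existing": 1.0, "act_otherwise": 0.0}  # assumed payoffs

def ev(action):
    # in the 'not_exist' branch the function's output is irrelevant: it contributes 0
    # to every action, so only the 'exist' branch distinguishes them
    return prior["exist"] * payoff_if_exist[action] + prior["not_exist"] * 0.0

best = max(payoff_if_exist, key=ev)
print(best)  # act_as_if_existing
# the prior on existing was never updated away from 50/50; the choice falls out of the EV calc
```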
(as in the ‘sleeping beauty problem’, once you have all the relevant variables, you can run simulations / predict what choice you’d prefer to take (such as if offered to bet that the day is tuesday) without ever ‘assigning a probability’)
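a quick monte-carlo sketch of that sleeping-beauty point. the specific bet (+1 if it’s tuesday, −1 otherwise, offered at every awakening) is an assumed payoff structure of mine, just to make the example concrete; the point is that the better policy falls out of simulated payoffs without ever assigning ‘the probability that it’s tuesday’:

```python
import random

# monte-carlo sketch: compare betting policies by total simulated payoff,
# never computing 'the probability it is tuesday'.
# assumed bet: +1 if tuesday, -1 otherwise, offered at each awakening.

def total_payoff(accept_bet, trials=100_000, seed=0):
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        heads = rng.random() < 0.5
        # beauty is woken once on heads, twice on tails
        awakenings = ["monday"] if heads else ["monday", "tuesday"]
        for day in awakenings:
            if accept_bet:
                total += 1 if day == "tuesday" else -1
    return total

print(total_payoff(True) < total_payoff(False))  # True: at these stakes, decline the bet
```

(heads-awakenings lose 1 each and tails-awakenings net to 0, so always accepting loses overall at these assumed stakes; a different payoff ratio could flip the preferred policy, but either way no ‘probability of tuesday’ was ever assigned.)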
(note: ‘the world’ is underdefined here; ‘the real world’ is not a fundamental mathematical entity that can be pointed to in any formal systems we know of. we can fix this by having the paragraph, instead of referring to ‘the world’, refer to some specific function that the function in question is uncertain about being contained in.
we seem to conceptualize the ‘world’ we are in as some particular ‘real world’ and not just some abstract mathematical function of infinitely many. i encourage thinking more about this, though it’s unrelated to the main topic of this comment.)
I thought this piece was going to be about the firm Anthropic . . . anyway, interesting subject, carry on!