[Question] What is the reasoning behind the “anthropic shadow” effect?

Suppose that every million years on the dot, some catastrophic event either happens or does not happen with probability P or (1-P) respectively. Suppose that if the event happens at one of these times, it destroys all life, permanently, with probability Q. Suppose that Q is known, but P is not, and we initially adopt a prior for it which is uniform between 0 and 1.


Given a perfect historical record of when the event has or has not occurred, we could update our prior for P based on this evidence to obtain a posterior for P which will be sharply peaked at (# of times event has occurred) / (# of times event could have occurred). I will refer to this as the “naive estimate”.
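Concretely (this is just the standard Beta–Binomial update; k and n are my own notation for the two counts):

```latex
% Uniform prior on P; k = number of occurrences, n = number of opportunities.
p(P \mid k, n) \;\propto\; P^{k} (1-P)^{n-k},
\qquad\text{i.e.}\quad P \mid k, n \;\sim\; \mathrm{Beta}(k+1,\, n-k+1),
% whose mode is the naive estimate:
\hat{P} \;=\; \frac{k}{n}.
```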


In this paper, the naive estimate is argued to be wrong because of an effect called “anthropic shadow”. In particular, it is supposed to be an underestimate. My understanding of the argument is the following: if you pick a fixed value of P and simulate history a large number of times, then in the cases where an observer like us evolves, the observer’s calculation of (# of times event has occurred) / (# of times event could have occurred) will on average be significantly below the true value of P. This is because observers are more likely to evolve after periods of unusually low catastrophic activity. In making this argument, the authors take a frequentist approach to the estimation of P (P is taken to be a fixed unknown parameter rather than a random variable with some prior distribution), but my understanding is that a fully Bayesian approach would also be supposed to differ from the naive estimate of the previous paragraph.
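Here is the simulation argument as I understand it, sketched in code. This is a toy Monte Carlo of my own, not code from the paper, and P_TRUE, Q and N_EPOCHS are illustrative values:

```python
import random

# Illustrative parameters (my own choices, not taken from the paper)
P_TRUE = 0.3     # true per-epoch probability of the catastrophic event
Q = 0.5          # probability that an occurring event destroys all life
N_EPOCHS = 20    # number of million-year checkpoints before "now"
N_SIMS = 100_000

naive_estimates = []
for _ in range(N_SIMS):
    events = 0
    survived = True
    for _ in range(N_EPOCHS):
        if random.random() < P_TRUE:       # the event happens this epoch
            events += 1
            if random.random() < Q:        # and it permanently destroys life
                survived = False
                break
    if survived:                           # condition on an observer existing
        naive_estimates.append(events / N_EPOCHS)

print("true P:", P_TRUE)
print("mean naive estimate among survivors:",
      sum(naive_estimates) / len(naive_estimates))
# The survivor-conditioned average lands below P_TRUE: histories with fewer
# events are more likely to leave anyone alive to do the counting.
```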

But consider an analogous non-anthropic scenario. Suppose we flip a biased coin a hundred times, which lands heads with probability P (unknown). Whenever this coin lands heads, we immediately flip a second biased coin which lands heads with probability Q (known). If we ever get two heads, one from each coin, we paint a blue state marker red, and it remains red from then on. After the hundred tosses of Coin #1, we find that the state marker is blue, and Coin #1 has landed heads N times. How should we estimate P?
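A quick sketch of this protocol in the same toy style (my own code; the parameter values are illustrative):

```python
import random

def run_protocol(p_true, q, n_flips=100):
    """One run of the protocol: returns (N, marker_is_blue)."""
    heads = 0
    blue = True
    for _ in range(n_flips):
        if random.random() < p_true:    # Coin #1 lands heads
            heads += 1
            if random.random() < q:     # Coin #2 also lands heads
                blue = False            # marker painted red; it stays red
    return heads, blue

# Q is chosen small here so that blue outcomes are not vanishingly rare
# over 100 flips; both values are illustrative.
P_TRUE, Q = 0.3, 0.1
blue_counts = [n for n, blue in
               (run_protocol(P_TRUE, Q) for _ in range(100_000)) if blue]
print("true P:", P_TRUE)
print("mean naive estimate among blue runs:",
      sum(blue_counts) / (100 * len(blue_counts)))
# This also lands below P_TRUE, mirroring the anthropic simulation above.
```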


In this scenario, it is true that if you run a large number of simulations at fixed P and look at the naive estimate (N/100) from the cases which end blue, it will on average be below the true value of P, for the same reasons as in the previous scenario. Nevertheless, I think the naive estimate is still correct here. If N is already given, then the colour of the state marker gives you no additional evidence about the value of P, because the colour depends on P only through N. What the simulation argument misses, by working within the blue-state outcomes at fixed P, is that you are more likely to finish in a blue state when P is smaller.
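In symbols (my own formalization of the point), the likelihood factorizes, and the only factor carrying the colour information does not depend on P:

```latex
% The marker's colour depends on P only through N: the marker stays blue
% iff each of the N heads of Coin #1 was followed by tails on Coin #2.
P(N, \mathrm{blue} \mid P)
  \;=\; P(N \mid P)\, P(\mathrm{blue} \mid N)
  \;=\; \binom{100}{N} P^{N} (1-P)^{100-N} \cdot (1-Q)^{N}.
% The factor (1-Q)^{N} is constant in P, so with a uniform prior
p(P \mid N, \mathrm{blue}) \;\propto\; P^{N} (1-P)^{100-N},
% the same posterior as from N alone, with mode N/100.
```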


So the first part of my question is: What is the difference between the existence/non-existence distinction, and the red/blue distinction, which makes anthropic shadow happen in the former case but not the latter?


And the second part is: How can the anthropic shadow argument be phrased in a fully Bayesian way? How should I obtain a posterior for P given some prior, the historical record, and the fact of my existence?
