Having thought this through some more, I’ve realised I’m wrong, sorry!
Person A shouldn’t say that the probability of extinction halves each century, but they can say that it will decay as 1/N, and that will still lead to an enormous future without them ever seeming implausibly overconfident.
A 1/N decay in extinction risk per century (conditional on making it that far) implies an O(1/N) chance of surviving at least N centuries, since the survival probability is the product of (1 − 1/n) over the centuries, which telescopes to roughly 1/N. That in turn implies an O(1/N^2) unconditional chance of going extinct in the Nth century. Assuming that the value of a future that ends in the Nth century is at least proportional to N (a modest assumption), the expected value of the future is a sum of terms that decay no faster than 1/N, so this sum diverges, and we get a future with infinite expected value.
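As a quick numerical sanity check of this argument (my own sketch, not part of the thread): take the per-century extinction risk to be exactly 1/n for century n ≥ 2, and the value of a future that ends in century n to be exactly n. The survival probability then telescopes to 1/N, and the running expected value grows like the harmonic series:

```python
# Sketch: verify that survival decays like 1/N and expected value diverges.
# Assumptions (mine): risk in century n is 1/n for n >= 2; value of a
# future lasting n centuries is exactly n.
N = 1_000_000

survive = 1.0          # P(still alive at the start of century n)
expected_value = 0.0   # running sum of P(extinct in century n) * n

for n in range(2, N + 1):
    risk = 1.0 / n                   # conditional extinction risk in century n
    p_extinct_now = survive * risk   # unconditional P(extinct in century n)
    expected_value += p_extinct_now * n
    survive *= 1.0 - risk

# survive telescopes: prod_{k=2}^{N} (1 - 1/k) = 1/N
print(survive)         # ~ 1e-06 for N = 1e6
print(expected_value)  # ~ ln(N) ≈ 14.4: grows without bound as N increases
```

Each unconditional term is (1/(n−1))·(1/n), so multiplying by the value n leaves 1/(n−1), and the partial sums grow like ln(N) — slow, but divergent, matching the argument above.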
I think your original argument is right.
I still have separate reservations about allowing small chances of high stakes to infect our decision making like this, but I completely retract my original comment!
Thanks for looking into it more!