I’m not seeing the barrier to Person A’s thinking there’s a 1/1000 chance, conditional on reaching the 50th century, of going extinct in that century. We could easily expect to survive 50 centuries at that rate, and then have the risk consistently decay (halving each century, or something like that) beyond that point, right?
If you instead mean to invoke, say, the 50 millionth century, then I’d think it’s crazy on its face to suddenly expect a 1/1000 chance of extinction after surviving so long. That would no longer “seem, on the face of it, credible”.
Am I missing something?
I was assuming in my example that the “Time of Perils” that Person A believes we might be living through is over by the 50th century, so that the 50th century already falls within the period where extinction risk is supposed to have become very low.
But suppose Person A adopts your alternative probabilities instead. Person A now believes there is a 1/1000 chance of going extinct in the 50th century, conditional on reaching it, and that the probability halves in each century after that.
But if that’s what they believe, you can now just run my argument on the 100th century instead. Person A now proposes a probability of roughly 10^(-18) of going extinct in the 100th century (conditional on reaching it), which seems implausibly overconfident to me on the face of it!
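Spelling out the arithmetic behind that figure (a quick check, assuming the halving starts immediately after the 50th century):

```latex
% Conditional extinction risk in century N under the halving model,
% assuming a 1/1000 risk in century 50 that halves every century thereafter:
p_N = 10^{-3} \cdot \left(\tfrac{1}{2}\right)^{N-50}
% At the 100th century:
p_{100} = 10^{-3} \cdot 2^{-50} \approx 8.9 \times 10^{-19} \approx 10^{-18}
```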
I agree with you that if we were considering the 50 millionth century, then a probability of 1/1000 would be far too high. It would be crazy to stipulate a probability for the Nth century that is much higher than 1/N, because surviving N centuries is evidence that the typical extinction risk per century is lower than that (except maybe if we were considering centuries close to the time the sun is expected to die...?).
But my point is that in order to get a truly big future, with the kind of stakes that dominate our expected value calculations, we need the probability of extinction to decay much faster than 1/N. We need the “Time of Perils” hypothesis. The risk needs to decay exponentially* (something like the halving that you’ve suggested), and before too long that exponential decay is going to lead to implausibly low probabilities of extinction.
*Edit: Actually, I’m not too confident in this claim now that I think it through some more. Perhaps you can still get a very large future with sub-exponential decay. Maybe this is another way out for Person A, in fact!
Having thought this through some more, I’ve realised I’m wrong, sorry!
Person A shouldn’t say that the probability of extinction halves each century, but they can say that it will decay as 1/N, and that will still lead to an enormous future without them ever seeming implausibly overconfident.
A 1/N decay in extinction risk per century (conditional on making it that far) implies an O(1/N) chance of surviving at least N centuries, which in turn implies an O(1/N^2) unconditional chance of going extinct in the Nth century. Assuming the value of a future that ends with extinction in the Nth century is at least proportional to N (a modest assumption), the expected value of the future is a sum of terms that decay no faster than 1/N. That sum diverges, so we get a future with infinite expected value.
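Here is that argument in symbols, as a sketch under concrete assumptions: the conditional hazard is exactly 1/N in century N (from century 2 onwards), and the value of a future that ends in century N is at least c·N for some constant c.

```latex
% Conditional extinction risk (hazard) in century N:
h_N = \tfrac{1}{N}
% Chance of surviving at least N centuries: a telescoping product, O(1/N):
\Pr(\text{survive} \ge N) = \prod_{k=2}^{N-1}\left(1 - \tfrac{1}{k}\right)
                          = \prod_{k=2}^{N-1}\tfrac{k-1}{k} = \tfrac{1}{N-1}
% Unconditional chance of going extinct in the Nth century: O(1/N^2):
q_N = \Pr(\text{survive} \ge N)\cdot h_N = \tfrac{1}{(N-1)\,N}
% With value at least c*N for extinction in century N, the expected value is
% bounded below by a harmonic-type series, which diverges:
\mathbb{E}[V] \ \ge\ \sum_{N \ge 2} c\,N\,q_N \ =\ \sum_{N \ge 2} \tfrac{c}{N-1} \ =\ \infty
```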
I think your original argument is right.
I still have separate reservations about allowing small chances of high stakes to infect our decision making like this, but I completely retract my original comment!
Thanks for looking into it more!