Edit: I no longer endorse this. The important point I was missing was that Person A’s probability of extinction per century only needs to decay as 1/N for the value of the future to remain enormous, and a 1/N decay is not implausibly overconfident.
You are saying that we do not need to assign high probability to the “time of perils” hypothesis in order to get high stakes. We only need to assign it non-vanishing probability. And assigning it vanishing probability would appear to be implausibly overconfident.
But I’m not sure this works, because I think it is impossible to avoid assigning vanishingly small probability to some outcome. If you just frame the question differently, which position appears to be the overconfident one can be reversed.
Suppose you ask two people what credence they each have in the “time of perils” hypothesis. Person A replies with 10%, and Person B replies with 10^(-20). Person B sounds wildly overconfident.
But now ask each of them what the probability is that humanity (or humanity’s descendants/creations) will go extinct in the 50th century, conditional on surviving until that point. Person B may respond in many different ways. Maybe they say 1/1000. But Person A is now committed to giving a vanishingly small answer to this question, in order to be consistent with their 10% credence in “time of perils”. Now it is Person A who sounds overconfident!
Person A is committed to this, because Person A places a non-vanishing probability on the future being very large. But the probability of making it to the far future is just the product of the probabilities of making it through each century along the way (conditional on surviving the centuries prior). For there to be a non-vanishing probability of a large future, most of these probabilities must be extremely close to 1. Does that not also seem overconfident?
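To make the “extremely close to 1” point concrete, here’s a rough back-of-the-envelope sketch. The million-century horizon and the 10% target below are just illustrative numbers I’ve picked, not figures Person A is committed to:

```python
# If the chance of reaching century N is the product of the per-century
# survival probabilities, how close to 1 must they typically be?
N = 10**6          # centuries of survival needed for a "very large" future (illustrative)
target = 0.10      # the non-vanishing probability we want to assign to reaching it

# With a constant per-century survival probability p we need p**N >= target,
# i.e. p >= target**(1/N); the same bound applies to the geometric mean in general.
p_min = target ** (1 / N)
print(f"minimum per-century survival probability: {p_min:.8f}")
print(f"maximum average per-century extinction risk: {1 - p_min:.2e}")
# -> roughly 2.3e-06, i.e. a few parts per million per century
```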
I don’t think this example tells us which of Person A or Person B is doing the right thing, but I think it shows that we can’t decide between them with an argument of the form: “this person’s view is implausible because they are assigning vanishingly small probability to something that seems, on the face of it, credible”.
I’m not seeing the barrier to Person A’s thinking there’s a 1/1000 chance, conditional on reaching the 50th century, of going extinct in that century. We could easily expect to survive 50 centuries at that rate, and then have the risk consistently decay (halving each century, or something like that) beyond that point, right?
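(For what it’s worth, a quick check of the “easily expect to survive 50 centuries” part, assuming a constant 1/1000 risk per century:)

```python
# Probability of surviving 50 consecutive centuries at a constant 1/1000 risk each
p_survive_50 = (1 - 1e-3) ** 50
print(f"P(survive 50 centuries): {p_survive_50:.3f}")   # -> about 0.951
```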
If you instead mean to invoke, say, the 50 millionth century, then I’d think it’s crazy on its face to suddenly expect a 1/1000 chance of extinction after surviving so long. That would no longer “seem, on the face of it, credible”.
Am I missing something?
I was assuming in my example that the “time of perils” that Person A believes we might be living through is supposed to be over by the 50th century, so that the 50th century is already in the period where extinction risk is supposed to have become very low.
But suppose Person A adopts your alternative probabilities instead. Person A now believes in a 1/1000 chance of going extinct in the 50th century, conditional on reaching it, and then the probability halves in each century after that.
But if that’s what they believe, you can now just run my argument on the 100th century instead. Person A now proposes a probability of ~10^(-18) of going extinct in the 100th century (conditional on reaching it) which seems implausibly overconfident to me on the face of it!
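(The arithmetic behind that figure, under the halving assumption: the 1/1000 risk at the 50th century gets halved 50 more times by the 100th.)

```python
# Conditional extinction risk at the 100th century if risk is 1/1000 at the
# 50th century and halves every century thereafter
risk_100 = 1e-3 * 0.5 ** (100 - 50)
print(f"risk in the 100th century: {risk_100:.1e}")   # -> about 8.9e-19, i.e. ~1e-18
```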
I agree with you that if we were considering the 50 millionth century, a probability of 1/1000 would be far too high. I agree that it would be crazy to stipulate a probability for the Nth century that is much higher than 1/N, because surviving N centuries is evidence that the typical extinction risk per century is lower than this (except maybe if we were considering centuries close to the time the sun is expected to die?).
But my point is that in order to get a truly big future, with the kind of stakes that dominate our expected value calculations, we need the probability of extinction to decay much faster than 1/N. We need the “time of perils” hypothesis. It needs to decay exponentially* (something like the halving that you’ve suggested). And before too long that exponential decay is going to lead to implausibly low probabilities of extinction.
*Edit: Actually I’m not too confident in this claim now that I think it through some more. Perhaps you can still get a very large future with sub-exponential decay. Maybe this is another way out for Person A, in fact!
Having thought this through some more, I’ve realised I’m wrong, sorry!
Person A shouldn’t say that the probability of extinction halves each century, but they can say that it will decay as 1/N, and that will still lead to an enormous future without them ever seeming implausibly overconfident.
A 1/N decay in extinction risk per century (conditional on making it that far) implies a chance of order 1/N of surviving at least N centuries, which in turn implies a chance of order 1/N^2 of going extinct in the Nth century (unconditionally). Assuming that the value of a future that ends with extinction in the Nth century is at least proportional to N (a modest assumption), the expected value of the future is a sum of terms that decay no faster than 1/N, so the sum diverges, and we get a future with infinite expected value.
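Here’s a rough numerical sketch of that argument, in case it’s useful. The 1/1000 cap on early-century risk, the unit of value per century, and the million-century horizon are arbitrary choices of mine, purely for illustration:

```python
# Hazard model: conditional extinction risk in century n is 1/n once n is large,
# capped at 1/1000 for early centuries (an arbitrary choice for this sketch).
def hazard(n: int) -> float:
    return min(1e-3, 1.0 / n)

horizon = 10**6
survival = 1.0          # P(still around at the start of century n)
expected_value = 0.0    # running sum of n * P(extinction occurs in century n)

for n in range(1, horizon + 1):
    p_extinct_in_n = survival * hazard(n)   # unconditional, ~1/n^2 for large n
    expected_value += n * p_extinct_in_n    # value-weighted term, ~1/n for large n
    survival *= 1.0 - hazard(n)
    if n in (10**3, 10**4, 10**5, 10**6):
        print(f"N = {n:>7}:  P(survive N centuries) ~ {survival:.2e},  "
              f"partial expected value ~ {expected_value:.0f}")
```

The printed survival probability falls off roughly like 1/N, but the partial sums of the expected value keep growing (roughly like log N), which is the divergence the argument needs.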
I think your original argument is right.
I still have separate reservations about allowing small chances of high stakes to infect our decision making like this, but I completely retract my original comment!
Thanks for looking into it more!