I don’t have time to get into all the details, but I think that while your intuition is reasonable (I used to share it), the maths does actually turn out my way, at least on one interpretation of what you mean. I looked into this when wondering whether the doomsday argument suggested that the EV of the future must be small. Try writing out the algebra for a Gott-style prior on which there is an x% chance we are in the first x%, for all x. You get a Pareto distribution: a power law with infinite mean. While this prior puts very little chance on there being a big future ahead, the size of each possible future compensates, so that each order of magnitude of increase in the size of the future contributes an equal expected population, and the sum is infinite.
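Here is a sketch of that algebra (notation mine: n is the number of people born so far, N the eventual total, and I read the claim as holding for all x in (0, 1]):

```latex
% Gott-style prior: P(we are in the first fraction x of all people) = x, i.e.
%   P(n/N <= x) = x  for 0 < x <= 1.
% Substituting t = n/x gives a Pareto (power-law) survival function with shape 1:
\[
  P(N > t) = \frac{n}{t}, \quad t \ge n,
  \qquad \text{with density } f(t) = \frac{n}{t^{2}}.
\]
% The mean diverges, and each order of magnitude contributes the same expected amount:
\[
  \int_{10^{k} n}^{10^{k+1} n} t \cdot \frac{n}{t^{2}} \, dt = n \ln 10
  \quad \text{for every } k,
  \qquad \text{so } \mathbb{E}[N] = \int_{n}^{\infty} \frac{n}{t} \, dt = \infty.
\]
```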
I’m not quite sure what to make of this, and it may be quite brittle (e.g. if we were somehow certain that there weren’t more than 10^100 people in the future, the expected population wouldn’t be all that high). But as a raw prior it really does both take an extreme outside view, saying we are equally likely to live at any relative position in the sequence, *and* assign extremely high (infinite) EV to the future: not because it thinks there is any single future whose EV is high, but because the series diverges.
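To put a rough number on that brittleness, here is one way to truncate the Pareto at a cap M (lumping the leftover tail mass at the cap; this construction, and the n ≈ 10^11 figure for people born so far, are my approximations):

```latex
% Truncate at M by moving the tail mass P(N > M) = n/M onto the point M:
\[
  \mathbb{E}[N \mid \text{cap } M]
  = \int_{n}^{M} t \cdot \frac{n}{t^{2}} \, dt + M \cdot \frac{n}{M}
  = n \left( \ln \frac{M}{n} + 1 \right).
\]
% With n ~ 10^11 people born so far and M = 10^100, this is about
%   10^11 * (89 ln 10 + 1) ~ 2 * 10^13,
% only a couple of hundred times the population so far, and tiny relative to the cap.
```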
This isn’t quite the same as your claim (which is about influence), but it does seem to ‘save existential risk work’ from this priors-based challenge (I don’t actually think it needed saving, but that is another story).
Interesting point!
The diverging series looks like a version of the St Petersburg paradox, which has fooled me before. In the original version, you have a 2^-k chance of winning 2^k for every positive integer k, which gives an infinite expected payoff. One way in which it’s brittle is that, as you say, the payoff is quite limited if we have some upper bound on the size of the population. Two other mathematical ways to break it are 1) making the payoff just 1.99^k, or 2) making it 2^(0.99k).
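Concretely, both tweaks turn the expectation into a convergent geometric series (the arithmetic here is mine):

```latex
% Original game: E = \sum_{k \ge 1} 2^{-k} \cdot 2^{k} = \sum_{k \ge 1} 1 = \infty.
% Variant 1: payoff 1.99^k.
\[
  \sum_{k=1}^{\infty} 2^{-k} (1.99)^{k}
  = \sum_{k=1}^{\infty} (0.995)^{k}
  = \frac{0.995}{1 - 0.995} = 199.
\]
% Variant 2: payoff 2^(0.99k).
\[
  \sum_{k=1}^{\infty} 2^{-k} \, 2^{0.99k}
  = \sum_{k=1}^{\infty} 2^{-0.01k}
  = \frac{2^{-0.01}}{1 - 2^{-0.01}} \approx 143.8.
\]
```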
On second thoughts, I think it’s worth clarifying that my claim is still true, even though yours is important in its own right. On Gott’s reasoning, P(high influence | the world has 2^N times the number of people who’ve already lived) is still just 2^-N (or 2^-(N-1) if summed over all k >= N). As you said, these tiny probabilities are balanced out by impact that grows without bound.
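Spelling this out in Gott’s terms (bucketing futures by doublings is my discretisation): the probability of each doubling falls off exactly as fast as its size grows, so every bucket contributes the same expected population:

```latex
% Bucket k: the eventual total is about 2^k times the n people so far, with P = 2^{-k}.
% Tail probability of at least a 2^N-fold future:
\[
  \sum_{k=N}^{\infty} 2^{-k} = 2^{-(N-1)}.
\]
% Expected population from bucket k: 2^{-k} \cdot 2^{k} n = n for every k,
% so the total expected population \sum_{k} n diverges despite the tiny tail probabilities.
```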
I’ll write up a separate objection to that claim, but first a clarifying question: why do you call Gott’s conditional probability a prior? Isn’t it more of a likelihood? In my model it should be combined with a prior P(total number of people the world will ever contain). The resulting posterior is then the prior for further enquiries.
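In symbols, the model I have in mind (my formalisation; the 1/N likelihood is the usual self-sampling assumption):

```latex
% Likelihood: if N people will ever live and we are equally likely to be any of them,
% the chance of finding ourselves at birth rank n is
\[
  P(\text{rank } n \mid N) = \frac{1}{N}, \qquad N \ge n.
\]
% Combining this with a prior P(N) over the total number of people gives the posterior
\[
  P(N \mid \text{rank } n) \propto \frac{P(N)}{N},
\]
% which then serves as the prior for further enquiries.
```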