Thanks for this very thorough reply. There are so many strands here that I can’t really hope to do justice to them all, but I’ll make a few observations.
1) There are two versions of my argument. The weak/vague one is that a uniform prior is wrong and the real prior should decay over time, such that you can’t make your extreme claim from priors. The strong/precise one is that it should decay as 1/n^2, in line with a version of LLS. The latter is meant more as an illustration. It is my go-to default for things like this, but my main point here is the weaker one. It seems that you agree that the prior should decay, and that the main question now is whether it decays fast enough to make your prior-based points moot. I’m not quite sure how to resolve that. But I note that from this position we can reach neither your argument that, from priors, this is far too unlikely for our evidence to overturn, nor my statement of the opposite.
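For concreteness, the 1/n^2 decay falls straight out of Laplace’s rule: with a uniform prior on an unknown constant per-period chance p, the probability that the event first occurs in period n is the integral of p(1-p)^(n-1) over p, which equals 1/(n(n+1)). A minimal sketch (the function name is mine):

```python
from fractions import Fraction

def lls_first_occurrence(n):
    """P(event first occurs in period n) under Laplace's Law of Succession:
    a uniform prior on the per-period chance p gives mass 1/(n(n+1))."""
    return Fraction(1, n * (n + 1))

print(lls_first_occurrence(1))    # 1/2  (a 1/2 chance in the first period)
print(lls_first_occurrence(100))  # 1/10100, i.e. roughly 1/n^2 decay
# The series telescopes, so the mass over the first N periods is N/(N+1):
print(sum(lls_first_occurrence(n) for n in range(1, 10001)))  # 10000/10001
```

So the tail is heavy enough that the prior never rules out late periods, but early periods still dominate.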
2) I wouldn’t use the LLS prior for arbitrary superlative properties where you fix the total population. I’d use it only if the population over time was radically unknown (so that the first person is much more likely to be strongest than the thousandth, because there probably won’t be a thousand) or where there is a strong time dependency such that it happening at one time rules out later times.
3) You are right that I am appealing to some structural properties beyond mere superlatives, such as extinction or other permanent lock-in. This is because these things happening in a century would be sufficient for that century to have a decent chance of being the most influential (technically this still depends on the influenceability of the event, but I think most people would grant that, conditional on next century being the end of humanity, it is no longer surprising at all if this or next century were the most influential). So I think that your prior-setting approach proves too much, telling us that there is almost no chance of extinction or permanent lock-in next century (even after updating on evidence). This feels fishy — a bit like Bostrom’s ‘presumptuous philosopher’ example. I think it looks even more fishy in your worked example, where the prior is low precisely because of an assumption about how long we will last without extinction: especially as that assumption is compatible with, say, a 50% chance of extinction in the next century. (I don’t think this is a knockdown blow, but I’m trying to indicate the part of your argument I think would be most likely to fall, and roughly why.)
4) I agree there is an issue to do with too many hypotheses, and a related issue of what the first timescale is on which to apply a 1/2 chance of the event occurring. I think these can be dealt with together. You modify the raw LLS prior by some other kind of prior you have for each particular type of event (which you need to have, since some are sub-events of others and rationality requires you to assign them lower probability). You could operationalise this by asking over what time frame you’d expect a 1/2 chance of that event occurring. Then LLS isn’t acting as an indifference principle, but rather just as a way of keeping track of how to update your ur-prior in light of how many time periods have elapsed without the event occurring. I think this should work out somewhat similarly, just with a stretched PDF that still decays as 1/n^2, but am not sure. There may be a literature on this.
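One crude way to implement the stretching (this operationalisation, and the choice to spread each block’s mass uniformly, are my assumptions, not anything established): pick the block length T over which your ur-prior gives a 1/2 chance of the event occurring, apply LLS at the level of T-period blocks, and spread each block’s mass evenly over its periods. The tail then still decays as 1/n^2:

```python
from fractions import Fraction

def stretched_lls(n, T):
    """P(event first occurs in period n), applying Laplace's rule to blocks
    of T periods (so the first T periods jointly carry the initial 1/2)
    and spreading each block's mass 1/(k(k+1)) uniformly within the block."""
    k = (n - 1) // T + 1  # which block (1-indexed) period n falls in
    return Fraction(1, k * (k + 1)) / T

# With T = 1 this reduces to the raw LLS prior:
print(stretched_lls(5, 1))                              # 1/30
# With T = 10, the first ten periods share the initial 1/2:
print(sum(stretched_lls(n, 10) for n in range(1, 11)))  # 1/2
```

For n much larger than T the block index k is about n/T, so the per-period mass is roughly T/n^2 — the same 1/n^2 shape, just rescaled, which matches the "stretched PDF" guess above.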