Under Toby's prior, what is the prior probability that the most influential century ever is in the past?
Quite high. If you think it hasn't happened yet, then this is a problem for my prior that Will's doesn't have.
More precisely, the argument I sketched gives a prior whose PDF decays roughly as 1/n^2 (which corresponds to the chance of it first happening in the next period, after n absences, decaying as ~1/n). You might be able to tweak this such that it is less likely than not to have happened by now, but I think the cleanest versions predict it would have happened by now. The clean version of Laplace's Law of Succession, measured in centuries, says there would only be a 1/2,001 chance it hadn't happened before now, which reflects poorly on the prior, but I don't think it quite serves to rule it out. If you don't know whether it has happened yet (e.g. you are unsure of things like Will's Axial Age argument), this would give some extra weight to that possibility.
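To make the Laplace figure concrete, here is a small sketch of my own (not from the thread), assuming the clean Beta(1, 1) version of the law and roughly 2,000 centuries of human history:

```python
# Laplace's Law of Succession with a uniform Beta(1, 1) prior on the
# per-century rate: after n event-free centuries, the chance the event
# first happens in the next century is 1/(n + 2), and the prior chance
# of seeing n event-free centuries in the first place is 1/(n + 1).
def p_first_next_century(n):
    return 1 / (n + 2)

def p_no_event_so_far(n):
    return 1 / (n + 1)

n = 2000  # roughly 2,000 centuries of human history
print(p_no_event_so_far(n))     # the 1/2,001 chance quoted above
print(p_first_next_century(n))  # the ~1/n hazard after n absences
```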
Given this, if one had a hyperprior over different possible Beta distributions, shouldn't 2,000 centuries of no event occurring cause one to update quite hard against the (0.5, 0.5) or (1, 1) hyperparameters, and in favour of a prior that was massively skewed towards the per-century probability of a lock-in event being very low?
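One way to check this intuition (a sketch of my own, not from the thread): with a Beta(a, b) prior on the per-century rate, the marginal likelihood of n event-free centuries is B(a, b + n)/B(a, b), and a hyperprior over (a, b) pairs gets reweighted by exactly that factor. The specific (a, b) values below are illustrative.

```python
from math import lgamma, exp

def log_beta(a, b):
    # log of the Beta function B(a, b) via log-gamma, for numerical stability
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def lik_no_events(a, b, n):
    # P(n event-free centuries | Beta(a, b) prior), with the per-century
    # rate integrated out: B(a, b + n) / B(a, b)
    return exp(log_beta(a, b + n) - log_beta(a, b))

n = 2000
for a, b in [(1, 1), (0.5, 0.5), (0.5, 50)]:  # (0.5, 50) skews towards low rates
    print((a, b), lik_no_events(a, b, n))
```

The component skewed towards low rates fits 2,000 event-free centuries hundreds of times better than Beta(1, 1), so a hyperprior would indeed shift hard in its direction.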
(And noting that, depending exactly on how the proposition is specified, I think we can be very confident that it hasn't happened yet. E.g. if the proposition under consideration was "a values lock-in event occurs such that everyone after this point has the same values".)
That's interesting. Earlier I suggested that a mixture of different priors that included some like mine would give a result very different to your result. But you are right to say that we can interpret this in two ways: as a mixture of ur-priors or as a mixture of priors we get after updating on the length of time so far. I was implicitly assuming the latter, but maybe the former is better, and it would indeed lessen or eliminate the effect I mentioned.
Your suggestion is also interesting as a general approach: choosing a distribution over these Beta distributions instead of debating between certainty in (0, 0), (0.5, 0.5), and (1, 1). For some distributions over the Beta parameters the maths is probably quite tractable. That might be an answer to the right meta-rational approach rather than an answer to the right rational approach, or something, but it does seem nicely robust.
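As a sketch of that tractability (my own illustration, with made-up mixture components): for a discrete mixture of Beta priors, the posterior is just the prior weights reweighted by each component's marginal likelihood, and the mixture's predictive probability is a weighted average of the components' posterior means.

```python
from math import lgamma, exp

def log_beta(a, b):
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def mixture_update(components, n):
    """components: list of (weight, a, b) Beta mixture components.
    Returns posterior weights after n event-free centuries and the
    mixture's predictive probability of an event in the next century."""
    liks = [w * exp(log_beta(a, b + n) - log_beta(a, b)) for w, a, b in components]
    z = sum(liks)
    post = [l / z for l in liks]
    # each component's posterior is Beta(a, b + n), whose mean is a / (a + b + n)
    pred = sum(p * a / (a + b + n) for p, (_, a, b) in zip(post, components))
    return post, pred

# equal prior weight on the Jeffreys (0.5, 0.5) and uniform (1, 1) components
post, pred = mixture_update([(0.5, 0.5, 0.5), (0.5, 1, 1)], 2000)
```

Note that the improper (0, 0) prior cannot be dropped into this formula directly, since B(0, 0) diverges; it would need to be handled as a limit.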
I don't understand this. Your last comment suggests that there may be several key events (some of which may be in the past), but I read your top-level comment as assuming that there is only one, which precludes all future key events (i.e. something like lock-in or extinction). I would have interpreted your initial post as follows:
Suppose we observe 20 past centuries during which no key event happens. By Laplace's Law of Succession, we now think that the odds are 1/22 in each century. So you could say that the odds that a key event "would have occurred" over the course of 20 centuries is 1 - (1 - 1/22)^20 = 60.6%. However, we just said that we observed no key event, and that's what our "hazard rate" is based on, so it is moot to ask what could have been. The probability is 0.
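The arithmetic in this reconstruction checks out; as a quick sanity check (my own, not part of the thread):

```python
p = 1 / 22                        # Laplace estimate after 20 event-free centuries
p_would_have = 1 - (1 - p) ** 20  # chance such an event "would have occurred"
print(round(100 * p_would_have, 1))  # 60.6
```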
This seems off, and I think the problem is equating "no key event" with "not hingy", which is too simple, because one can potentially also influence key events in the distant future. (Or perhaps there aren't even any key events, or there are other ways to have a lasting impact.)