I'm confused about the methodology here. Laplace's law of succession seems dimensionless. How do you get something with units of "years" out of it? Couldn't you just as easily have looked at the probability of the conjecture being proven on a given day, or month, or Martian year, and come up with a different distribution?
I'm also confused about what this experiment will tell us about the utility of Laplace's law outside of the realm of mathematical conjectures. If you used the same logic to estimate human life expectancy, for example, it would clearly be very wrong. If Laplace's rule has a hope of being useful, it seems it would only be after taking some kind of average performance over a variety of different domains. I don't think its usefulness in one particular domain should tell us very much.
I model a calendar year as a trial attempt. See here (and the first comment in that post) for a timeless version.
I think that this issue ends up being moot in practice. If we think in terms of something other than years, Laplace would give:
1 − (1 − 1/(d·n + 2))^d
where, e.g., if we are thinking in terms of months, d = 12
instead of
1/(n+2)
But if we look at the Taylor expansion of the first expression, we notice that its leading term is
1/(n+2)
and in practice, I think that the further terms are going to be pretty small when n is reasonably large.
Alternatively, you can notice that
(1 − 1/(d·n))^d converges to e^(−1/n) (the n-th root of 1/e) as d grows, and that it does so fairly quickly.
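To make this concrete, here is a quick numerical sketch (mine, not from the post; the helper names are made up) comparing the plain yearly Laplace probability with the subdivided version for a few values of d:

```python
# Quick numerical sketch (not from the original post): compare the plain
# Laplace probability 1/(n+2) with the subdivided version
# 1 - (1 - 1/(d*n + 2))^d for a few sub-periods d per year.

def laplace_yearly(n):
    """Laplace's rule with a year as the trial: P(solved next year) after n failed years."""
    return 1 / (n + 2)

def laplace_subdivided(n, d):
    """Same probability, but modelled as d sub-trials per year (d = 12 for months)."""
    return 1 - (1 - 1 / (d * n + 2)) ** d

for n in (5, 20, 100):
    for d in (12, 365):
        print(f"n={n:3d} d={d:3d}  yearly={laplace_yearly(n):.4f}  "
              f"subdivided={laplace_subdivided(n, d):.4f}")
```

For n = 20 this prints roughly 0.0455 vs 0.0485: close, and closer still as n grows.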
Edit: This comment is wrong and I'm now very embarrassed by it. It was based on a misunderstanding of what NunoSempere is doing that would have been resolved by a more careful read of the first sentence of the forum post!
Thank you for the link to the timeless version, that is nice!
But I don't agree with your argument that this issue is moot in practice. I think you should repeat your R analysis with months instead of years, and see how your predicted percentiles change. I predict they will all be precisely 12 times smaller (willing to bet a small amount on this).
This follows from dimensional analysis. How does the R script know what a year is? Only because you picked a year as your trial. If you repeat your analysis using a month as a trial attempt, your predicted mean proof time will then be X months instead of X years (i.e. 12 times smaller).
The same goes for any other dimensionful quantity you've computed, like the percentiles.
You could try to apply the linked timeless version instead, although I think you'd find you run into insurmountable regularization problems around t=0, for exactly the same reasons. You can't get something dimensionful out of something dimensionless. The analysis doesn't know what a second is. The timeless version works when applied retrospectively, but it won't work predicting forward from scratch like you're trying to do here, unless you use some kind of prior to set a time-scale.
Consider a conjecture first made twenty years ago.
If I look at a year as the trial period:
n = 20, probability predicted by Laplace of being solved in the next year = 1/(n+2) = 1/22 ≈ 4.5%
If I look at a month as the trial period:
n = 20 × 12, probability predicted by Laplace of being solved in the next year = 1 minus the probability that it isn't solved in any of the twelve months = 1 − (1 − 1/(n+2))^12 ≈ 4.8%
As mentioned, both are pretty similar.
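In code (a small sketch; the variable names are mine), the same arithmetic looks like this:

```python
# Reproducing the arithmetic above for a conjecture posited 20 years ago.

# Yearly trials:
n_years = 20
p_year_trial = 1 / (n_years + 2)                    # 1/22 ≈ 4.5%

# Monthly trials: twelve chances next year, each with Laplace probability 1/(n+2):
n_months = 20 * 12
p_month_trial = 1 / (n_months + 2)                  # 1/242
p_solved_next_year = 1 - (1 - p_month_trial) ** 12  # ≈ 4.8%

print(f"yearly trial:   {p_year_trial:.2%}")
print(f"monthly trials: {p_solved_next_year:.2%}")
```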
Apologies, I misunderstood a fundamental aspect of what you're doing! For some reason in my head you'd picked a set of conjectures which had just been posited this year, and were seeing how Laplace's rule of succession would perform when using it to extrapolate forward with no historical input.
I don't know where I got this wrong impression from, because you state very clearly what you're doing in the first sentence of your post. I should have read it more carefully before making the bold claims in my last comment. I actually even had a go at stating the terms of the bet I suggested before quickly realising what I'd missed and retracting. But if you want to hold me to it you can (I might be interpreting the forum wrong but I think you can still see the deleted comment?)
I'm not embarrassed by my original concern about the dimensions, but your original reply addressed them nicely and I can see it likely doesn't make a huge difference here whether you take a year or a month, at least as long as the conjecture was posited a good number of years ago (in the limit that "trial period" / "time since posited" goes to zero, you presumably recover the timeless result you referenced).
New EA forum suggestion: you should be able to disagree with your own comments.
Hey, I'm not in the habit of turning down free money, so feel free to make a small donation to https://www.every.org/quantifieduncertainty
Sure, will do!
I like this idea! Quick question: have you considered whether, for a version of this that uses past data/conjectures, one could use existing data compiled by AI Impacts rather than the Wikipedia article from 2015 (as you suggest)?
(Though I guess if you go back in time sufficiently far, it arguably becomes less clear whether Laplace's rule is a plausible model. E.g., did mathematicians in any sense "try" to square the circle in every year between Antiquity and 1882?)
I wish I had known about the AI Impacts data sooner.
As they point out, looking at remembered conjectures maybe adds some bias. But then later in their post, they mention:
Which could also be used to answer this question. But in their dataset, I don't see any conjectures proved between 2014 and 2020, which is odd.
Anyways, thanks for the reference!
I would have thought that "all conjectures" is a pretty natural reference class for this problem, and Laplace is typically used when we don't have such prior information, though if the resolution rate diverges substantially from the Laplace rule prediction I think it would still be interesting.
I think, because we expect the resolution rate of different conjectures to be correlated, this experiment is a bit like a single draw from a distribution over annual resolution probabilities rather than many draws from such a distribution (if you can forgive a little frequentism).
I agree, but then you'd have to come up with a dataset of conjectures.
Yep!
I think that my thinking here is:
We could model the chance of a conjecture being resolved with reference to internal details. For instance, we could look at the increasing number of mathematicians, at how hard a given conjecture seems, etc.
However, that modelling is tricky, and in some cases the assumptions could be ambiguous.
But we could also use Laplace's rule of succession. This has the disadvantage that it doesn't capture the inner structure of the model, but it has the advantage that it is simple, and perhaps more robust. The question is, does it really work? And so I was looking at one particular case which could be somewhat informative.
I think I used to like Laplace's law a bit more in the past, for some of those reasons. But I now like it a bit less, because maybe it fails to capture the inner structure of what it is predicting.
I agree. On the other hand, I kind of expect it to be informative nonetheless.