This comment is exactly right, although it seems I came across more strongly on the point about geometric mean of odds than I intended. I wanted to say basically exactly what you did in this comment—there are relatively sound reasons to treat geometric mean of odds as the default in this case, but there is also a reasonable argument for simple means (see, for example, footnotes 7 and 9, where I make this point). What I wanted to get across was that the argument about simple means vs geometric mean of odds was likely not the most productive argument to be having—point estimates always (necessarily) summarise the underlying distribution of data, and it is dangerous to rely only on summary statistics when the distribution itself contains interesting and actionable information.
Just for clarity—I use geometric mean of odds, which I then convert back into probability as an additional step (because people are more familiar with probability than odds). If I said anywhere that I took the geometric mean of probabilities then this is a typo and I will correct it!
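For concreteness, here is a minimal Python sketch of that procedure: convert each forecast to odds, take the geometric mean of the odds, and convert the pooled odds back into a probability. The forecast values below are made up purely for illustration and are not taken from the post; a simple arithmetic mean is printed alongside for comparison.

```python
import math

def geomean_of_odds(probs):
    """Pool probabilities via the geometric mean of their odds,
    then convert the pooled odds back into a probability."""
    odds = [p / (1 - p) for p in probs]          # probability -> odds
    log_mean = sum(math.log(o) for o in odds) / len(odds)
    pooled_odds = math.exp(log_mean)             # geometric mean of the odds
    return pooled_odds / (1 + pooled_odds)       # odds -> probability

# Illustrative forecasts (made up, not the post's data)
forecasts = [0.01, 0.05, 0.20, 0.50]
print(geomean_of_odds(forecasts))        # pooled estimate via geomean of odds (~0.10)
print(sum(forecasts) / len(forecasts))   # simple arithmetic mean (0.19), for comparison
```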
I agree about this in general but I’m skeptical about treating distributions of probabilities the same way we treat distributions of quantities.
Perhaps more importantly, I assumed that the FTX FF got their numbers for reasons other than deferring to the forecasts of random rationalists. If I’m correct, this leads me to think that sophisticated statistics on top of the forecasts of random rationalists is unlikely to change their minds.
Thanks! This is my fault for commenting before checking the math! However, I think you could’ve emphasized what you actually did more. You did not say “geometric mean of probabilities,” but you also did not say “geometric mean of odds” anywhere except in footnote 7 and this comment. In the main text you only said “geometric mean”, and the word “probability” appeared frequently in the surrounding text.
I think that’s a fair criticism. For all I know, the FF are not at all uncertain about their estimates (or at least not uncertain over the order of magnitude), and so the SDO mechanism doesn’t come into play. I still think there is value in explicitly and systematically considering uncertainty, even if you end up concluding it doesn’t really matter for your specific beliefs—if only because you can’t be totally confident it doesn’t matter until you have actually done the maths.
I’ve updated the text to replace ‘geometric mean’ with ‘geometric mean of odds’ everywhere it occurs. Thanks so much for the close reading and spotting the error.
Thanks! Though it’s not so much an error as just moderately confusing communication.
As you probably already know, I think one advantage of geometric mean of odds over probabilities is that it directly addresses one of Ross’s objections:
> Consider an experiment where you flip a fair coin A. If A is heads you flip a 99%-heads coin B; if A is tails you flip a 1%-heads coin B. We’re interested in forming a subjective probability that B is heads.
> The answer I find intuitive for p(B=heads) is 50%, which is achieved by taking the arithmetic average over worlds. The geometric average over worlds gives 9.9% instead, which doesn’t seem like “fair betting odds” for B being heads under any natural interpretation of those words. What’s worse, the geometric-mean methodology suggests a 9.9% subjective probability of tails, and then p(H)+p(T) does not add to 1.
Geomean of odds of 99% heads and 1% heads is
sqrt(99 ∗ 1) : sqrt(1 ∗ 99) = 1 : 1, i.e. 50%
More generally, geomean of odds X:Y and Y:X is 50%, and geomean of odds is equally sensitive to outlier probabilities in both directions (whereas geomean of probabilities is only sensitive to outliers at the low end).
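As a quick sanity check of the numbers above, here is a small Python sketch that reproduces the figures in Ross's example; it assumes nothing beyond the 99% and 1% values already given.

```python
import math

def geomean_probs(ps):
    """Geometric mean of raw probabilities."""
    return math.exp(sum(math.log(p) for p in ps) / len(ps))

def geomean_odds(ps):
    """Geometric mean of odds, converted back to a probability."""
    odds = [p / (1 - p) for p in ps]
    pooled = math.exp(sum(math.log(o) for o in odds) / len(odds))
    return pooled / (1 + pooled)

heads = [0.99, 0.01]            # P(B = heads) in the two worlds of the coin example
tails = [1 - p for p in heads]  # P(B = tails) in the same two worlds

print(geomean_probs(heads))                         # ~0.099, the figure Ross objects to
print(geomean_probs(heads) + geomean_probs(tails))  # ~0.199, does not sum to 1
print(geomean_odds(heads))                          # 0.5, matches the arithmetic average over worlds
print(geomean_odds(heads) + geomean_odds(tails))    # 1.0, heads and tails stay consistent
```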
I agree that geomean-of-odds performs better than geomean-of-probs!
I still think it has issues for converting your beliefs to actions, but I collected that discussion under a cousin comment here: https://forum.effectivealtruism.org/posts/Z7r83zrSXcis6ymKo/dissolving-ai-risk-parameter-uncertainty-in-ai-future?commentId=9LxG3WDa4QkLhT36r