I think there are good reasons for preferring geometric mean of odds to simple mean when presenting data of this type, but not good enough that I’d take to the barricades over them. Linch (below) links to the same post I do in giving my reasons to believe this. Overall, however, this is an essay about distributions rather than point estimates so if your main objection is to the summary statistic I used then I think we agree on the material points, but have a disagreement about how the work should be presented.
On the point about betting odds, I note that the contest announcement also states “Applicants need not agree with or use our same conception of probability”. I think the way in which I actually disagree with the Future Fund is more radical than simple means vs geometric mean of odds—I think they ought to stop putting so much emphasis on summary statistics altogether.
Thanks for clarifying “geomean of probabilities” versus “geomean of odds” elsethread. I agree that that resolves some (but not all) of my concerns with geomeaning.
I agree with your pro-distribution position here, but I think you will be pleasantly surprised by how much reasoning over distributions goes into cost-benefit estimates at the Future Fund. This claim is based on nonpublic information, though, as those estimates have not yet been put up for public discussion. I will suggest, though, that it’s not an accident that Leopold Aschenbrenner is talking with QURI about improvements to Squiggle: https://github.com/quantified-uncertainty/squiggle/discussions
So my subjective take is that if the true issue is “you should reason over distributions of core parameters”, then in fact there’s little disagreement between you and the FF judges (which is good!), but it all adds up to normality (which is bad for the claim “moving to reasoning over distributions should move your subjective probabilities”).
If we’re focusing on the Worldview Prize question as posed (“should these probability estimates change?”), then I think the geo-vs-arith difference is totally cruxy—note that the arithmetic summary of your results (9.65%) is in line with the product of the baseline subjective probabilities for the prize (something like a 3% for loss-of-control x-risk before 2043; something like 9% before 2100).
I do think it’s reasonable to critique the fact that those point probabilities are presented without any indication that the path of reasoning goes through reasoning over distributions, though. So I personally am happy with this post calling attention to distributional reasoning, since it’s unclear in this case whether that is an update. I just don’t expect it to win the prizes for changing estimates.
Because I do think distributional reasoning is important, though, I do want to zoom in on the arith-vs-geo question (which I think, on reflection, is subtler than the position I took in my top-level comment). Rather than being a minor detail, I think this is important because it influences whether greater uncertainty tends to raise or lower our “fair betting odds” (which, at the end of the day, are the numbers that matter for how the FF decides to spend money).
I agree with Jamie and you and Linch that when pooling forecasts, it’s reasonable (maybe optimal? maybe not?) to use geomeans. So if you’re pooling expert forecasts of {1:1000, 1:100, 1:10}, you might have a subjective belief of something like “1:100, but with a ‘standard deviation’ of 6.5x to either side”. This is lower than the arithmean-pooled summary stats, and I think that’s directionally right.
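To make that concrete, here is a minimal sketch (with the three example forecasts above; the variable names are mine) of what geomean-of-odds pooling gives compared to arithmetic-mean pooling of the probabilities:

```python
import numpy as np

# Sketch of the pooling above, using the three example forecasts (1:1000, 1:100, 1:10).
odds = np.array([1/1000, 1/100, 1/10])
probs = odds / (1 + odds)

# Geometric-mean pooling of odds: lands at 1:100 (~1%).
pooled_odds = np.exp(np.log(odds).mean())
geo_pooled_prob = pooled_odds / (1 + pooled_odds)

# Arithmetic-mean pooling of probabilities, for comparison: ~3.4%.
arith_pooled_prob = probs.mean()

# Multiplicative spread around the geomean pool: ~6.5x to either side.
geo_sd = np.exp(np.log(odds).std())

print(geo_pooled_prob, arith_pooled_prob, geo_sd)
# ~0.0099, ~0.034, ~6.5
```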
I think this is an importantly different question from “how should you act when your subjective belief is a distribution like that?”. I think that if you have a subjective belief like “1%, but with a ‘standard deviation’ of 6.5x to either side”, you should push a button that gives you $98.8 if you’re right and loses $1.2 if you’re wrong. In particular, I think you should take the arithmean over your subjective distribution of beliefs (here, ~1.4%) and take bets that are good relative to that number. This will lead to decision-relevant effective probabilities that are higher than geomean-pooled point estimates (for small probabilities).
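To put numbers on that button claim: the break-even probability for a bet that pays $98.8 if right and loses $1.2 if wrong is 1.2%, so the decision flips depending on whether you act on the geomean-pooled 1% or on the ~1.4% arithmean quoted above. A quick check:

```python
def bet_ev(p, win=98.8, lose=-1.2):
    """Expected value of the button: +$98.8 if the event happens, -$1.2 if not."""
    return p * win + (1 - p) * lose

break_even = 1.2 / (98.8 + 1.2)   # 1.2%: take the bet iff your probability is above this
print(break_even)                 # 0.012
print(bet_ev(0.010))              # -0.20: acting on the geomean-pooled 1% says decline
print(bet_ev(0.014))              # +0.20: acting on the ~1.4% arithmean says accept
```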
If you’re combining multiple case parameters multiplicatively, then the arith>geo effect compounds as you introduce uncertainty in more places—if the quantity of interest is x*y, where x and y each had expert estimates of {1:1000, 1:100, 1:10} that we assume independent, then arithmean(x*y) is about twice geomean(x*y). Here’s a quick Squiggle showing what I mean: https://www.squiggle-language.com/playground/#code=eNqrVirOyC8PLs3NTSyqVLIqKSpN1QELuaZkluQXwUQy8zJLMhNzggtLM9PTc1KDS4oy89KVrJQqFGwVcvLT8%2FKLchNzNIAsDQM9A0NNHQ0jfWPNOAM9U82YvJi8SqJUVQFVVShoKVQCsaGBQUyeUi0A3tIyEg%3D%3D
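One way to see why the gap compounds (a sketch, assuming the parameters are independent): the arithmetic mean of a product of independent positive variables factorizes, and so does the geometric mean, so the arith/geo ratio multiplies across parameters; if each parameter's ratio is about 1.4, the product's is about 2. The lognormal shapes and sigma below are my own illustrative choices, not a reconstruction of the linked Squiggle model:

```python
import numpy as np

rng = np.random.default_rng(0)

def arith_over_geo(samples):
    """Ratio of arithmetic mean to geometric mean for a positive sample."""
    return samples.mean() / np.exp(np.log(samples).mean())

# Two independent uncertain parameters (illustrative lognormals, sigma chosen so that
# each has an arith/geo ratio of roughly 1.4, matching the single-parameter case above).
x = rng.lognormal(np.log(0.01), 0.8, size=1_000_000)
y = rng.lognormal(np.log(0.01), 0.8, size=1_000_000)

print(arith_over_geo(x), arith_over_geo(y), arith_over_geo(x * y))
# ~1.38, ~1.38, ~1.9 -- the arith>geo effect roughly doubles with two uncertain factors
```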
For this use-case (eg, “what bets should we make with our money”), I’d argue that you need to use a point estimate to decide what bets to make, and that you should make that point estimate by (1) geomean-pooling raw estimates of parameters, (2) reasoning over distributions of all parameters, then (3) taking arithmean of the resulting distribution-over-probabilities and (4) acting according to that mean probability.
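As a sketch of what steps (1)-(3) could look like mechanically (the distributional choices here, lognormal-over-odds and the second parameter's numbers, are assumptions of mine for illustration, not the FF's actual model):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical expert estimates (as odds) for two multiplicative parameters.
expert_odds = [
    [1/1000, 1/100, 1/10],   # parameter 1
    [1/30, 1/10, 1/3],       # parameter 2 (made-up numbers)
]

# (1) geomean-pool the raw estimates of each parameter, keeping the spread around the pool.
pooled = [(np.log(o).mean(), np.log(o).std()) for o in map(np.asarray, expert_odds)]

# (2) reason over distributions of all parameters (lognormal over odds, an assumption).
p_samples = np.ones(100_000)
for mu, sigma in pooled:
    odds_draw = rng.lognormal(mu, sigma, size=p_samples.shape)
    p_samples *= odds_draw / (1 + odds_draw)

# (3) take the arithmean of the resulting distribution-over-probabilities...
p_act = p_samples.mean()

# (4) ...and evaluate bets/actions against that number.
print(p_act)
```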
In the case of the Worldview Prize, my interpretation is that the prize is described and judged in terms of (3), because that is the most directly valuable thing in terms of producing better (4)s.
An explicit case where I think it’s important to arithmean over your subjective distribution of beliefs:
coin A is fair
coin B is either 2% heads or 98% heads, you don’t know
you lose if either comes up tails.
So your p(win) is “either 1% or 49%”.
I claim the FF should push the button that pays us $80 if win, -$20 if lose, and in general make action decisions consistent with a point estimate of 25%. (I’m ignoring here the opportunity to seek value of information, which could be significant!).
It’s important not to use geomean-of-odds to produce your actions in this scenario; that gives you roughly 1:10 odds (a probability of about 9%), and would imply you should avoid the +$80;-$20 button, which I claim is the wrong choice.
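Running the numbers on this example as a quick check:

```python
import numpy as np

# The two equally likely worlds: p(win) is either 1% or 49%.
p_win = np.array([0.5 * 0.02, 0.5 * 0.98])

arith = p_win.mean()                        # 0.25
odds = p_win / (1 - p_win)
geo_odds = np.exp(np.log(odds).mean())
geo = geo_odds / (1 + geo_odds)             # ~0.09 (geomean of odds, as a probability)

def ev(p, win=80, lose=-20):
    return p * win + (1 - p) * lose

true_ev = np.mean([ev(p) for p in p_win])   # +$5, averaging the per-world EVs
print(arith, geo, ev(arith), ev(geo), true_ev)
# 0.25, ~0.09, +5.0, ~-11, +5.0
```

Because EV is linear in p, averaging the per-world EVs gives the same answer as plugging in the arithmean, which is why acting on the 25% point estimate reproduces the correct decision here while acting on the geomean does not.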
I agree that the arith-vs-geo question is basically the crux when it comes to whether this essay should move FF’s ‘fair betting probabilities’ - it sounds like everyone is pretty happy with the point about distributions and I’m really pleased about that because it was the main point I was trying to get across. I’m even more pleased that there is background work going on in the analysis of uncertainty space, because that’s an area where public statements by AI Risk organisations have sometimes lagged behind the state of the art in other risk management applications.
With respect to the crux, I hate to say it—because I’d love to be able to make as robust a claim for the prize as possible—but I’m not sure there is a principled reason for using geomean over arithmean for this application (or vice versa). The way I view it, they are both just snapshots of what is ‘really’ going on, which is the full distribution of possible outcomes given in the graphs / model. By analogy, I would be very suspicious of someone who always argued the arithmean would be a better estimate of central tendency than the median for every dataset / use case! I agree with you that the problem of which is best for this particular dataset / use case is subtle, and I think I would characterise it as being a question of whether my manipulations of people’s forecasts have retained some essential ‘forecast-y’ characteristic which means geomean is more appropriate for various features it has, or whether they have been processed into having some sort of ‘outcome-y’ characteristic in which case arithmean is more appropriate. I take your point below in the coin example and the obvious superiority of arithmeans for that application, but my interpretation is that the FF didn’t intend for the ‘fair betting odds’ position to limit discussion about alternate ways to think about probabilities (“Applicants need not agree with or use our same conception of probability”).
However, to be absolutely clear, even if geomean was the right measure of central tendency I wouldn’t expect the judges to pay that particular attention—if all I had done was find a novel way of averaging results then my argument would basically be mathematical sophistry, perhaps only one step better than simply redefining ‘AI Risk’ until I got a result I liked. I think the distribution point is the actually valuable part of the essay, and I’m quite explicit in the essay that neither geomean nor arithmean is a good substitute for the full distribution. While I would obviously be delighted if I could also convince you my weak preference for geomean as a summary statistic was actually robust and considered, I’m actually not especially wedded to the argument for one summary statistic over the other. I did realise after I got my results that the crux for moving probabilities was going to be a very dry debate about different measures of central tendency, but I figured since the Fund was interested in essays on the theme of “a bunch of this AI stuff is basically right, but we should be focusing on entirely different aspects of the problem” (even if they aren’t being strictly solicited for the prize) the distribution bit of the essay might find a readership there anyway.
By the way, I know your four-step argument is intended just as a sketch of why you prefer arithmean for this application, but I do want to just flag up that I think it goes wrong on step 4, because acting according to arithmean probability (or geomean, for that matter) throws away information about distributions. As I mention here and elsewhere, I think the distribution issue is far more important than the geo-vs-arith issue, so while I don’t really feel strongly if I lose the prize because the judges don’t share my intuition that geomean is a slightly better measure of central tendency, I would be sad to miss out because the distribution point was misunderstood! I describe in Section 5.2.2 how the distribution implied by my model would quite radically change some funding decisions, probably by more than an argument taking the arithmean to 3% (of course, if you’re already working on distribution issues then you’ve probably already reached those conclusions and so I won’t be changing your mind by making these points—but in terms of publicly available arguments about AI Risk I’d defend the case that the distribution issue implies more radical redistribution of funds than changing the arithmean to 1.6%). So I think “act according to that mean probability” is wrong for many important decisions you might want to take—analogous to buying a lot of trousers with 1.97 legs in my example in the essay. No additional comment if that is what you meant though and were just using shorthand for that position.
Clarifying, I do agree that there are some situations where you need something other than a subjective p(risk) to compare EV(value|action A) with EV(value|action B). I don’t actually know how to construct a clear analogy from the 1.97-legged trousers example if the variable we’re taking means over is probabilities (though I agree that there are non-analogous examples; VOI for example).
I’ll go further, though, and claim that what really matters is what worlds the risk is distributed over, and that expanding the point-estimate probability to a distribution of probabilities, by itself, doesn’t add any real value. If it is to be a valuable exercise, you have to be careful what you’re expanding and what you’re refusing to expand.
More concretely, you want to be expanding over things your intervention won’t control, and then asking about your intervention’s effect at each point in things-you-won’t-control-space, then integrating back together. If you expand over an arbitrary axis of uncertainty, then not only is there a multiplicity of valid expansions, but the natural interpretation will be misleading.
For example, say we have a 10% chance of drawing a dangerous ball from a series of urns, and 90% chance of drawing a safe one. If we describe it as (1) “50% chance of 9.9% risk, 50% chance of 10.1% risk” or (2) “50% chance of 19% risk, 50% chance of 1% risk” or (3) “10% chance of 99.1% risk, 90% chance of 0.1% risk”, how does that change our opinion of <intervention A>? (You can, of course, construct a two-step ball-drawing procedure that produces any of these distributions-over-probabilities.)
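All three descriptions carry the same marginal risk; they differ only in how that risk is sliced across worlds, which is easy to verify:

```python
# Each description is a list of (probability of world, risk within that world).
descriptions = {
    "(1)": [(0.5, 0.099), (0.5, 0.101)],
    "(2)": [(0.5, 0.19), (0.5, 0.01)],
    "(3)": [(0.1, 0.991), (0.9, 0.001)],
}

for name, worlds in descriptions.items():
    marginal_risk = sum(w * r for w, r in worlds)
    print(name, round(marginal_risk, 4))   # 0.1 in every case
```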
I think the natural intuition is that interventions are best in (2), because most probabilities of risk are middle-ish, and worst in (3), because probability of risk is near-determined. And this, I think, is analogous to the argument of the post that anti-AI-risk interventions are less valuable than the point-estimate probability would indicate.
But that argument assumes (and requires) that our interventions can only change the second ball-drawing step, and not the first. So using that argument requires that, in the first place, we sliced the distribution up over things we couldn’t control. (If that is the thing we can control with our intervention, then interventions are best in the world of (3).)
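A toy illustration of that dependence, using description (3) (the specific intervention effects below are made-up numbers): an intervention that can only act on the second draw buys much less than one that can shift which world we land in.

```python
# Description (3): 10% chance of a 99.1%-risk world, 90% chance of a 0.1%-risk world.
worlds = [(0.1, 0.991), (0.9, 0.001)]
baseline = sum(w * r for w, r in worlds)                 # 0.10

# Intervention on the second draw only: cut within-world risk by 20% (made-up effect).
risk_second_draw = sum(w * r * 0.8 for w, r in worlds)   # 0.08 -> saves 2 points

# Intervention on the first draw: move 5 points of world-probability to the safe world.
risk_first_draw = sum(w * r for w, r in [(0.05, 0.991), (0.95, 0.001)])  # ~0.05 -> saves ~5 points

print(baseline, baseline - risk_second_draw, baseline - risk_first_draw)
```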
Back to the argument of the original post: You’re deriving a distribution over several p(X|Y) parameters from expert surveys, and so the bottom-line distribution over total probabilities reflects the uncertainty in experts’ opinions on those conditional probabilities. Is it right to model our potential interventions as influencing the resolution of particular p(X|Y) rolls, or as influencing the distribution of p(X|Y) at a particular stage?
I claim it’s possible to argue either side.
Maybe a question like “p(much harder to build aligned than misaligned AGI | strong incentives to build AGI systems)” (the second survey question) is split between a quarter of the experts saying ~0% and three-quarters of the experts saying ~100%. (This extremizes the example, to sharpen the hypothetical analysis.) We interpret this as saying there’s a one-quarter chance we’re ~perfectly safe and a three-quarters chance that it’s hopeless to develop an aligned AGI instead of a misaligned one.
If we interpret that as if God will roll a die and put us in the “much harder” world with three-quarters probability and the “not much harder” world with one-quarter probability, then maybe our work to increase the chance we get an aligned AGI is low-value, because it’s unlikely to move either the ~0% or the ~100% much lower (and we can’t change the die). If this was the only stage, then maybe all of working on AGI risk is worthless.
But “three-quarter chance it’s hopeless” is also consistent with a scenario where there’s a three-quarters chance that AGI development will be available to anyone, and many low-resourced actors will not have alignment teams and find it ~impossible to develop with alignment, but a one-quarter chance that AGI development will be available only to well-resourced actors, who will find it trivial to add on an alignment team and develop alignment. But then working on AGI risk might not be worthless, since we can work on increasing the chance that AGI development is only available to actors with alignment teams.
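Putting rough numbers on the two readings (the intervention effects here are purely illustrative assumptions): under the die-roll reading, even a strong intervention barely moves the three-quarters figure; under the access reading, shifting which world we land in moves it substantially.

```python
# Survey split: 1/4 of experts at ~0%, 3/4 at ~100% for "much harder to align".
baseline = 0.25 * 0.0 + 0.75 * 1.0    # 0.75 chance the problem is "hopeless"

# Die-roll reading: we can only nudge the within-world numbers (illustrative nudge).
die_roll = 0.25 * 0.0 + 0.75 * 0.98   # 0.735: almost no change

# Access reading: we can shift 15 points of probability toward the well-resourced world.
access = 0.40 * 0.0 + 0.60 * 1.0      # 0.60: a much larger change

print(baseline, die_roll, access)
```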
I claim that it isn’t clear, from the survey results, whether the distributions of experts’ probabilities for each step reflect something more like the God-rolls-a-die model, or different opinions about the default path of a thing we can intervene on. And if that’s not clear, then it’s not clear what to do with the distribution-over-probabilities from the main results. Probably they’re a step forward in our collective understanding, but I don’t think you can conclude from the high chances of low risk that there’s a low value to working on risk mitigation.