I think we are roughly in agreement on this; it is just hard to talk about. I think that compressing the set of expert estimates down to a single measure of central tendency (e.g. the arithmetic mean) loses information about the distribution that is needed to give the right answer in a variety of situations. So in this sense, we shouldn’t aggregate first.
The ideal system would neither aggregate first into a single number, nor use each estimate independently and then aggregate from there (I suggested doing so as a contrast to aggregation first, but agree that it is not ideal). Instead, the ideal system would use the whole distribution of estimates (perhaps transformed based on some underlying model about where expert judgments come from, such as assuming that numbers between the point estimates are also plausible) and then do some kind of EV calculation based on that. But this is so general an approach that it does not offer much guidance without further development.
The ideal system would [not] aggregate first into a single number [...] Instead, the ideal system would use the whole distribution of estimates
I have been thinking a bit more about this.
And I have concluded that the ideal aggregation procedure should compress all the information into a single prediction—our best guess for the actual distribution of the event.
Concretely, I think that in an idealized framework we should be treating the expert predictions $p_1, \ldots, p_N$ as Bayesian evidence for the actual distribution of the event of interest $E$. That is, the idealized aggregation $\hat{p}$ should just match the conditional probability of the event given the predictions: $\hat{p} = P(E \mid p_1, \ldots, p_N) \propto P(E)\, P(p_1, \ldots, p_N \mid E)$.
Of course, for this procedure to be practical you need to know the generative model for the individual predictions $P(p_1, \ldots, p_N \mid E)$. This is for the most part not realistic: the generative model needs to take into account the details of how each forecaster generates their prediction and the redundancy of information between the predictions. So in practice we will need to approximate the aggregate measure using some sort of heuristic.
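For concreteness, here is a minimal sketch of what one such heuristic looks like under the simplest generative model I can think of, with deliberately strong assumptions that I am supplying purely for illustration: every forecaster is calibrated, shares the prior $P(E)$, and has observed evidence that is independent across forecasters. The function name and numbers are illustrative, not a recommendation.

```python
import numpy as np

def aggregate_bayes(probs, prior=0.5):
    """Pool calibrated forecasts into an estimate of P(E | p_1, ..., p_N).

    Toy assumption: every forecaster starts from the same prior P(E) = prior
    and updates on private evidence that is independent across forecasters.
    Each reported probability then contributes a likelihood ratio
    odds_i / prior_odds, so the posterior odds are
    prior_odds * prod_i (odds_i / prior_odds).
    """
    probs = np.asarray(probs, dtype=float)
    prior_odds = prior / (1.0 - prior)
    odds = probs / (1.0 - probs)
    posterior_odds = prior_odds * np.prod(odds / prior_odds)
    return posterior_odds / (1.0 + posterior_odds)

print(aggregate_bayes([0.7, 0.6, 0.8]))        # ~0.93: independent evidence compounds
print(aggregate_bayes([0.7, 0.6, 0.8], 0.1))   # ~0.999: under a low shared prior, each
                                               # 60-80% forecast encodes much stronger evidence
```

Note how strongly the answer depends on the assumed generative model; if the forecasters' evidence were largely redundant rather than independent, the pooled probability would stay close to the individual forecasts instead of compounding.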
But, crucially, the approximation does not depend on the downstream task we intend to use the aggregate prediction for.
This is somewhat hard for me to wrap my head around, since I too feel the intuitive pull of wanting to retain information about e.g. the spread of the individual probabilities. I would feel more nervous making decisions when the forecasters wildly disagree with each other, as opposed to when the forecasters speak with one voice.
What is this intuition telling us, then? What do we need the information about the spread for?
My answer is that we need to understand the resilience of the aggregated prediction to new information. This already plays a role in the aggregated prediction, since it helps us weight the relative importance we should give to our prior beliefs $P(E)$ vs the evidence from the experts $P(p_1, \ldots, p_N \mid E)$: a wider spread or a smaller number of forecaster predictions will lead to weaker evidence, and therefore a higher relative weighting of our priors.
Similarly, the spread of the distribution of estimates gives us information about how much we would gain from additional predictions.
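As a minimal sketch of how the spread and the number of predictions feed into this weighting, assume (purely for illustration) that each expert reports the true log-odds of $E$ plus independent Gaussian noise, and that our prior over the true log-odds is also Gaussian. The standard Normal-Normal update then makes the prior-vs-experts trade-off, and the resilience of the result, explicit:

```python
import numpy as np

def posterior_over_log_odds(expert_probs, prior_mean=0.0, prior_var=4.0, noise_var=1.0):
    """Illustrative model: each expert reports the true log-odds of E plus
    independent N(0, noise_var) noise, and our prior over the true log-odds
    is N(prior_mean, prior_var). The Normal-Normal update weights the prior
    and the experts by their precisions (inverse variances)."""
    p = np.asarray(expert_probs, dtype=float)
    log_odds = np.log(p / (1.0 - p))
    n = len(log_odds)
    post_var = 1.0 / (1.0 / prior_var + n / noise_var)
    post_mean = post_var * (prior_mean / prior_var + log_odds.sum() / noise_var)
    return post_mean, post_var   # posterior variance ~ how resilient the estimate is

# Three experts in tight agreement: the posterior hugs the experts and is narrow.
print(posterior_over_log_odds([0.70, 0.75, 0.72]))
# Two experts with a wide spread, treated as noisier evidence (e.g. by plugging the
# empirical variance of their log-odds into noise_var): more weight stays on the
# prior and the posterior is wider, i.e. less resilient to further predictions.
print(posterior_over_log_odds([0.30, 0.90], noise_var=4.0))
```

The posterior mean is what you would act on now; the posterior variance is what tells you how much an additional forecast (or new observation) could still move you.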
I think this neatly resolves the tension between aggregating and not aggregating, and clarifies when it is important to retain information about the distribution of forecasts: when the value of information is relevant. Which, admittedly, is quite often! But when we cannot acquire new information, or we can rule out value of information as decision-relevant, then we should aggregate first into a single number and make decisions based on our best guess, regardless of the task.
My answer is that we need to understand the resilience of the aggregated prediction to new information.
This seems roughly right to me. And in particular, I think this highlights the issue with the example of institutional failure. The problem with aggregating predictions to a single guess $p$ of annual failure, and then using $p$ to forecast, is that it assumes that the probability of failure in each year is independent from our perspective. But in fact, each year of no failure provides evidence that the risk of failure is low. And if the forecasters’ estimates initially had a wide spread, then we’re very sensitive to new information, and so we should update more on each passing year. This would lead to a high probability of failure in the first few years, but still a moderately high expected lifetime.
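As a toy numerical version of this point (two hypothetical forecasters and made-up numbers, just to show the mechanism):

```python
import numpy as np

# Two hypothetical forecasters disagree about the annual probability of failure.
annual_p = np.array([0.05, 0.50])
weights = np.array([0.5, 0.5])           # treat both estimates as equally plausible

# (a) Aggregate first to a single p and treat every year as independent:
p_single = weights @ annual_p            # 0.275
print("aggregate-first expected lifetime:", 1 / p_single)        # ~3.6 years

# (b) Keep the whole distribution over p and update on each year of survival.
#     Long-lived worlds are exactly the low-p worlds, so the expected lifetime
#     of the mixture is the weighted mean of 1/p, not 1 over the mean of p:
print("mixture expected lifetime:", weights @ (1 / annual_p))     # 11 years

# The first-year hazard is the same 0.275 in both cases, but after a few years
# of survival the posterior shifts toward the low-risk forecaster:
posterior = weights * (1 - annual_p) ** 5
posterior /= posterior.sum()
print("P(annual p = 0.05 | survived 5 years):", posterior[0])     # ~0.96
```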
I don’t think I get your argument for why the approximation should not depend on the downstream task. Could you elaborate?
I am also a bit confused about the relationship between spread and resilience: a larger spread of forecasts does not seem to necessarily imply weaker evidence. It seems like for a relatively rare event about which some forecasters could acquire insider information, a large spread might give you stronger evidence.
Imagine $E$ is about the future enactment of a quite unusual government policy, and one of your forecasters is a high-ranking government official. Then, if all of your forecasters are relatively well calibrated and have sufficient incentive to report their true beliefs, a 90% forecast for $E$ by the government official and a 1% forecast by everyone else should likely shift your beliefs a lot more towards $E$ than a 10% forecast by everyone.
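As a rough sketch of why, under one possible generative model (my own toy assumptions: the non-official forecasters only restate the public base rate, so their forecasts are fully redundant, while the official's forecast may also reflect private evidence):

```python
# Toy comparison: a rare policy with a public base rate of 1%.
base_rate = 0.01
base_odds = base_rate / (1 - base_rate)

def posterior_from_forecast(forecast, base=base_rate):
    """Treat `forecast` as the posterior of a calibrated forecaster who started
    from `base`; the implied likelihood ratio of their private evidence is
    forecast_odds / base_odds, which we apply to our own prior (= the base rate)."""
    lr = (forecast / (1 - forecast)) / (base / (1 - base))
    odds = base_odds * lr
    return odds / (1 + odds)

# Panel A: the official at 90%, everyone else at 1% (no info beyond the base rate).
print(posterior_from_forecast(0.90))   # 0.90
# Panel B: everyone at 10%, but all derived from the same shared public evidence,
# so the common shift only counts once.
print(posterior_from_forecast(0.10))   # 0.10
```

Under the earlier independent-evidence model the same two panels would come out the other way around, so how much the spread matters seems to depend entirely on the assumed generative model.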
I don’t think I get your argument for why the approximation should not depend on the downstream task. Could you elaborate?
Your best approximation of the summary distribution $\hat{p} = P(E \mid p_1, \ldots, p_N)$ is already “as good as it can get”. You think we should be cautious and treat this probability as if it could be higher for precautionary reasons? Then I argue that you should treat it as higher, regardless of how you arrived at the estimate.
In the end this circles back to basic Bayesian/utility theory: in the idealized framework, your credences about an event should be represented as a single probability. Departing from this idealization requires further justification.
a larger spread of forecasts does not seem to necessarily imply weaker evidence
You are right that “weaker evidence” is not exactly correct: this is more about the expected variance introduced by hypothetical additional predictions. I’ve realized I am confused about the best way to think about this in formal terms, so I wonder whether my intuition was right after all.
I think this is a good account of the institutional failure example, thank you!