I don’t think I get your argument for why the approximation should not depend on the downstream task. Could you elaborate?
Your best approximation of the summary probability ^p = P(E | p_1, ..., p_N) is already as good as it can get. If you think we should be cautious and, for precautionary reasons, treat this probability as if it were higher, then I argue you should simply treat it as higher, regardless of how you arrived at the estimate.
In the end this circles back to basic Bayesian and utility theory: in the idealized framework, your credences about an event are represented by a single probability, and only that probability enters the decision. Departing from this idealization requires further justification.
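To make the point concrete, here is a toy sketch of the idealized framework: two forecast sets with the same aggregate but very different spreads yield the same expected utility for any action. The simple-mean aggregation rule and the utility numbers are my own illustrative assumptions, not anything established in this thread:

```python
import numpy as np

def expected_utility(p, utility_if_E, utility_if_not_E):
    """Expected utility of an action given a single credence p in event E."""
    return p * utility_if_E + (1 - p) * utility_if_not_E

# Two hypothetical forecast sets with the same mean but different spread.
tight = np.array([0.48, 0.50, 0.52])
wide  = np.array([0.10, 0.50, 0.90])

# Under a simple-mean aggregation rule (an assumption for illustration),
# both sets collapse to the same summary probability ...
p_tight = tight.mean()
p_wide = wide.mean()

# ... and therefore yield the same expected utility for any action,
# regardless of the spread of the underlying forecasts.
assert np.isclose(p_tight, p_wide)
assert np.isclose(expected_utility(p_tight, -100.0, 0.0),
                  expected_utility(p_wide, -100.0, 0.0))
```

If spread is supposed to matter for the decision, it has to matter by changing the summary probability itself (or by breaking one of the idealization's assumptions), which is exactly the point at issue.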
"a larger spread of forecasts does not seem to necessarily imply weaker evidence"
You are right that "weaker evidence" is not exactly the right framing; this is more about the expected variance introduced by hypothetical additional predictions. I've realized I'm confused about the best way to think about this in formal terms, so I now wonder whether my intuition was right after all.