To lay out my tentative position a bit more:
I think forecasts about what some actor (a person, organisation, community, etc.) will come to believe about X overall can add value compared to just having a large set of forecasts about specific events relevant to X. This is because the former type of forecast can also account for:
- how the actor will interpret the evidence that those specific events provide regarding X
- many events relevant to X that we might not think to forecast specifically
On the other hand, forecasts about what some actor will believe about X in the future seem more at risk of causing undesirable feedback loops and distorted beliefs than forecasts about specific events relevant to X do.
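To make that feedback-loop worry more concrete, here's a minimal toy model (my own construction for illustration, not anything from the forecasting literature), in which the community's realized belief is a weighted average of its evidence-based belief and the published forecast. An erroneous forecast then partly validates itself, and its measured error understates how wrong it was:

```python
# Toy model of a self-fulfilling belief forecast.
# realized = (1 - w) * evidence + w * forecast, where w is how strongly
# the community defers to the published forecast of its own future belief.

def realized_belief(evidence: float, forecast: float, w: float) -> float:
    """Belief the community ends up with after partially deferring to the forecast."""
    return (1 - w) * evidence + w * forecast

evidence = 0.30   # belief the community would reach from the evidence alone
forecast = 0.70   # an erroneous published forecast of the community's future belief
w = 0.60          # degree of deference to the forecast

realized = realized_belief(evidence, forecast, w)  # 0.54

# The forecast's apparent error (measured against the realized belief) is
# smaller than its true error (measured against the evidence-only belief):
apparent_error = abs(realized - forecast)   # 0.16
true_error = abs(evidence - forecast)       # 0.40
print(f"realized belief: {realized:.2f}")
print(f"apparent error: {apparent_error:.2f} vs. true error: {true_error:.2f}")
```

In this toy model the apparent error is always (1 - w) times the true error, so the more the community defers to the forecast, the better bad forecasts look when scored against realized beliefs.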
I think forecasting the donation split of the EA Funds[1] would be interesting and could be useful. This seems to be a forecast of a specific event that's unusually well correlated with an actor's overall beliefs. I think that means it would have more of both the benefits and the risks mentioned above than the typical forecast of a specific event would, but less than a forecast that's directly about an actor's overall belief would.
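As a sketch of how such a question might be operationalized (the numbers, scoring rule, and resolution details here are all made up for illustration): a forecast could be a vector of shares across the four funds, scored against the realized split with a simple squared-error rule:

```python
# Hypothetical operationalization of "forecast the EA Funds donation split":
# a forecast is a vector of shares (summing to 1) across the four funds,
# scored against the realized split with mean squared error.

FUNDS = ["Global Health", "Animal Welfare", "Long-Term Future", "EA Infrastructure"]

def score_split(forecast: list[float], realized: list[float]) -> float:
    """Mean squared error between forecast and realized shares (lower is better)."""
    assert abs(sum(forecast) - 1.0) < 1e-9 and abs(sum(realized) - 1.0) < 1e-9
    return sum((f - r) ** 2 for f, r in zip(forecast, realized)) / len(forecast)

forecast = [0.40, 0.15, 0.30, 0.15]  # made-up forecast shares
realized = [0.35, 0.20, 0.30, 0.15]  # made-up realized shares
print(f"score: {score_split(forecast, realized):.4f}")  # ~0.0013
```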
This also makes me think that another thing potentially worth considering is predicting the beliefs of an actor which:
- is a subset of the EA community[2]
- seems to have a good process for forming beliefs
- seems likely to avoid updating problematically based on the forecast
Some spitballed examples, to illustrate the basic idea: Paul Christiano, Toby Ord, a survey of CEA staff, a survey of Open Phil staff.
This would still pose a risk of the EA community updating too strongly on erroneous forecasts of what this actor will believe. But it seems to at least reduce the risk of self-fulfilling prophecies/feedback loops, which somewhat blunts that downside.
I’m pretty sure this sort of thing has been done before (e.g., sort of, here). But this is a rationale for doing it that I hadn’t thought of before.
But this is just a list of considerations and options; I don’t know how to weigh them all up to work out what’s best.
[1] I assume you mean EA Funds rather than the EA Forum or the Effective Altruism Foundation—lots of EAFs floating about!
[2] I only give this criterion because of the particular context and goals at hand; there are of course many actors outside the EA community whose beliefs we should attend to.