In terms of "how big a deal will X be", there are several questions already of that form.
The difficulty with "what will people in general think about X" type questions is how to operationalise them, but there's potentially enough danger in not doing this for it to be worth the tradeoff.
Do you mean questions like "what will the state of AI/climate change/pandemics be" (as Khorton suggests), or things like "How big a deal will Group A think X is"? I assume the former?
I'm not sure I know what you mean by this (particularly the part after the comma).
Yes.
The "not" was in the wrong place; have fixed it now.
I had briefly got in touch with Rethink about trying to predict survey outcomes, but I'm not going ahead with this for now, as the concerns you raised seem bad even if low-probability. As an alternative, I'm considering asking about the donation split of EAF in ~5 years, which I think roughly tracks related ideas but seems to have less downside risk of the form you describe.
To lay out my tentative position a bit more:
I think forecasts about what some actor (a person, organisation, community, etc.) will overall believe in future about X can add value compared to just having a large set of forecasts about specific events that are relevant to X. This is because the former type of forecast can also account for:
how the actor will interpret the evidence that those specific events provide regarding X
lots of events we might not think to specifically forecast that could be relevant to X
On the other hand, forecasts about what some actor will believe in future about X seem more at risk of causing undesirable feedback loops and distorted beliefs than forecasts about specific events relevant to X do.
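As a toy illustration of that feedback-loop worry (the model, dynamics, and all parameters here are my own assumptions, not something anyone in this discussion has proposed): suppose the community partially updates toward each published forecast, the forecaster predicts the community's future belief with some persistent bias, and independent evidence pulls only weakly toward the truth.

```python
def simulate_feedback(truth, belief, forecaster_bias, trust, evidence_weight, rounds):
    """Toy dynamics: each round, a forecaster predicts the community's
    future belief (with a fixed bias), the community moves `trust` of the
    way toward that forecast, then `evidence_weight` of the way toward
    independent evidence of the truth."""
    for _ in range(rounds):
        forecast = belief + forecaster_bias
        belief += trust * (forecast - belief)          # update toward the forecast
        belief += evidence_weight * (truth - belief)   # weak pull toward the truth
    return belief
```

Under these assumed parameters (trust = 0.5, evidence_weight = 0.1), a persistent forecaster bias of 1 settles the community's belief at 4.5 when the truth is 0 — the loop amplifies the error well beyond the bias itself — while an unbiased forecaster leaves belief at the truth. Whether real communities behave anything like this is of course exactly the open question.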
I think forecasting the donation split of the EA Funds[1] would be interesting, and could be useful. This seems to be a forecast of a specific event that's unusually well correlated with an actor's overall beliefs. I think that means it would have more of both the benefits and the risks mentioned above than the typical forecast of a specific event would, but less than a forecast that's directly about an actor's overall belief would.
This also makes me think that another thing potentially worth considering is predicting the beliefs of an actor which:
is a subset of the EA community[2]
seems to have a good process of forming beliefs
seems likely to avoid updating problematically based on the forecast
Some spitballed examples, to illustrate the basic idea: Paul Christiano, Toby Ord, a survey of CEA staff, a survey of Open Phil staff.
This would still pose a risk of causing the EA community to update too strongly on erroneous forecasts of what this actor will believe. But it seems to at least reduce the risk of self-fulfilling prophecies/feedback loops, which somewhat blunts the effect.
I'm pretty sure this sort of thing has been done before (e.g., sort-of, here). But this is a rationale for doing it that I hadn't thought of before.
But this is really just a list of considerations and options; I don't know how to actually weigh it all up to work out what's best.
[1] I assume you mean EA Funds rather than the EA Forum or the Effective Altruism Foundation; lots of EAFs floating about!
[2] I only give this criterion because of the particular context and goals at hand; there are of course many actors outside the EA community whose beliefs we should attend to.