I agree this would be a genuine problem. I think it would be a little less of a problem if the question being forecasted wasn’t about the EA community’s beliefs but instead something about the state of AI/climate change/pandemics themselves.
This is really interesting, and potentially worth my abandoning the plan to write some questions on the outcomes of future EA surveys.
The difficulty with “what will people in general think about X” type questions is how to operationalise them, but there’s potentially enough danger in doing this for it not to be worth the tradeoff. I’m interested in more thoughts here.
In terms of “how big a deal will X be”, there are several questions already of that form. The Metaculus search function is not amazing, so I’m happy to dig things out if there are areas of particular interest, though several are mentioned elsewhere in this thread.
In terms of “how big a deal will X be”, there are several questions already of that form.
Do you mean questions like “what will the state of AI/climate change/pandemics be” (as Khorton suggests), or things like “How big a deal will Group A think X is”? I assume the former?
The difficulty with “what will people in general think about X” type questions is how to operationalise them, but there’s potentially enough danger in not doing this for it to be worth the tradeoff.
I’m not sure I know what you mean by this (particularly the part after the comma).
Yes.
I’m not sure I know what you mean by this (particularly the part after the comma).
The “not” was in the wrong place; I’ve fixed it now.
I had briefly got in touch with Rethink about trying to predict survey outcomes, but I’m not going ahead with this for now, as the concerns you raised seem bad even if low-probability. I’m considering, as an alternative, asking about the donation split of EAF in ~5 years, which I think roughly tracks the same underlying ideas but seems to have less downside risk of the form you describe.
To lay out my tentative position a bit more:
I think forecasts about what some actor (a person, organisation, community, etc.) will overall believe in future about X can add value compared to just having a large set of forecasts about specific events that are relevant to X. This is because the former type of forecast can also account for:
how the actor will interpret the evidence that those specific events provide regarding X
lots of events we might not think to specifically forecast that could be relevant to X
On the other hand, forecasts about what some actor will believe in future about X seem more at risk of causing undesirable feedback loops and distorted beliefs than forecasts about specific events relevant to X do.
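To make the feedback-loop worry a bit more concrete, here’s a toy sketch (entirely my own construction, with arbitrary numbers; the `anchoring` parameter is a made-up stand-in for how much the community updates on the published forecast of its own future belief):

```python
# Toy model of a self-fulfilling belief forecast (illustrative only).
# A community starts with some credence in X. A forecaster publishes a
# prediction of what the community will believe at a later date. If the
# community treats that forecast as evidence, its belief drifts toward
# the forecast, making even an erroneous forecast look accurate.

def simulate(initial_belief: float, forecast: float,
             anchoring: float, steps: int) -> float:
    """Evolve the community's belief, nudged toward the published forecast.

    anchoring = 0 means the forecast is ignored; anchoring = 1 means the
    community adopts the forecast wholesale each step.
    """
    belief = initial_belief
    for _ in range(steps):
        belief = (1 - anchoring) * belief + anchoring * forecast
    return belief

# The community initially assigns 30% to X, but someone (erroneously)
# forecasts that in a year the community will assign 60%.
for anchoring in (0.0, 0.1, 0.3):
    final = simulate(initial_belief=0.3, forecast=0.6,
                     anchoring=anchoring, steps=12)
    print(f"anchoring={anchoring:.1f} -> final belief {final:.2f}")
```

With no anchoring the erroneous forecast just stays wrong; with even modest anchoring it ends up looking well calibrated, purely because it moved the thing it was predicting.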
I think forecasting the donation split of the EA Funds[1] would be interesting, and could be useful. This seems to be a forecast of a specific event that’s unusually well correlated with an actor’s overall beliefs. I think that means it would have more of both the benefits and the risks mentioned above than the typical forecast of a specific event would, but less than a forecast that’s directly about an actor’s overall belief would.
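For what it’s worth, here’s a minimal sketch of how such a question might be operationalised and scored. The fund labels are shorthand for EA Funds’ cause-area funds, all the numbers are invented, and the two scoring rules are just common options, not anything proposed in this thread:

```python
import math

# Hypothetical operationalisation of "what will the donation split across
# the EA Funds be in ~5 years?": the forecast is a share for each fund,
# scored against the realised split once it resolves.

predicted = {"Global Health": 0.40, "Animal Welfare": 0.15,
             "Long-Term Future": 0.30, "Meta/Infrastructure": 0.15}
realised = {"Global Health": 0.50, "Animal Welfare": 0.10,
            "Long-Term Future": 0.25, "Meta/Infrastructure": 0.15}

def quadratic_score(pred, actual):
    """Brier-style mean squared error across fund shares (lower is better)."""
    return sum((pred[f] - actual[f]) ** 2 for f in actual) / len(actual)

def kl_divergence(actual, pred):
    """KL divergence D(realised || forecast) (lower is better)."""
    return sum(actual[f] * math.log(actual[f] / pred[f]) for f in actual)

print(f"quadratic score: {quadratic_score(predicted, realised):.4f}")
print(f"KL divergence:   {kl_divergence(realised, predicted):.4f}")
```

Either rule rewards spreading probability honestly across the funds, which seems like the property you’d want if the question is meant to track an actor’s overall beliefs.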
This also makes me think that another thing potentially worth considering is predicting the beliefs of an actor which:
is a subset of the EA community[2]
seems to have a good process of forming beliefs
seems likely to avoid updating problematically based on the forecast
Some spitballed examples, to illustrate the basic idea: Paul Christiano, Toby Ord, a survey of CEA staff, a survey of Open Phil staff.
This would still pose a risk of the EA community updating too strongly on erroneous forecasts of what this actor will believe. But it seems to at least reduce the risk of self-fulfilling prophecies/feedback loops, which somewhat blunts the concern.
I’m pretty sure this sort of thing has been done before (e.g., sort-of, here). But this is a rationale for doing it that I hadn’t thought of before.
But this is just a list of considerations and options; I don’t know how to actually weigh it all up to work out what’s best.
[1] I assume you mean EA Funds rather than the EA Forum or the Effective Altruism Foundation—lots of EAFs floating about!
[2] I only give this criterion because of the particular context and goals at hand; there are of course many actors outside the EA community whose beliefs we should attend to.