Thanks for doing this, great idea! I think Metaculus could provide some valuable insight into how society’s/EA’s/philosophy’s values might drift or converge over the coming decades.
For instance, I’m curious about where population ethics will be in 10-25 years. Something like, ‘In 2030 will the consensus within effective altruism be that “Total utilitarianism is closer to describing our best moral theories than average utilitarianism and person affecting views”?’
Having your insight on how to operationalize this would be useful, since I’m not very happy with my ideas:
1. Polling FHI and GW
2. A future PhilPapers Survey, if there is one
3. Some sort of citation count / number of papers on total, average, and person-affecting views
It would probably also be useful to get the opinion of a population ethicist.
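To make option 1 slightly more concrete, here’s a minimal sketch of how a poll-based resolution could work; the respondent counts, response options, and majority threshold are all invented for illustration rather than a proposed criterion:

```python
# Hypothetical sketch of resolving the question via option 1 (a poll).
# The respondent pool, response options, and threshold are all
# illustrative assumptions, not a proposed resolution criterion.
from collections import Counter

def resolve_poll(responses, threshold=0.5):
    """Resolve YES if a strict majority picks 'total' as closest
    to our best moral theories."""
    share_total = Counter(responses)["total"] / len(responses)
    return share_total > threshold

# Hypothetical 2030 poll of 20 researchers:
poll = ["total"] * 11 + ["average"] * 4 + ["person-affecting"] * 5
print(resolve_poll(poll))  # True, since 11/20 > 0.5
```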
Stepping back from that specific question, I think Metaculus could play a sort of sanity-checking, outside-view role for EA. Questions like ‘Will EA see AI risk (climate change/bio-risk/etc.) as less pressing in 2030 than it does now?’, or ‘Will EA in 2030 believe that EA should’ve invested more and donated less over the 2020s?’
I’d also be interested in forecasts on these topics.
It seems to me that there’d be a risk of self-fulfilling prophecies.
That is, we’d hope that what would happen is:
1. A bunch of forecasters predict what the EA community would end up believing after a great deal of thought, debate, analysis, etc.
2. We then update ourselves closer to believing that thing already, which could help us get to better decisions faster.
...But what might instead happen is:
1. A relatively small group of forecasters makes relatively unfounded forecasts.
2. The EA community—which is relatively small, unusually connected to Metaculus, and unusually interested in forecasts—then updates overly strongly on those forecasts, thus coming to believe something it wouldn’t otherwise have believed and doesn’t have good reason to believe.
(Perhaps this is like a time-travelling information cascade?)
I’m not saying the latter scenario is more likely than the former, nor that this means we shouldn’t solicit these forecasts. But the latter scenario seems likely enough to perhaps be an argument against soliciting these forecasts, and to at least be worth warning readers about clearly and repeatedly if these forecasts are indeed solicited.
Also, this might be especially bad if EAs start noticing that community beliefs are indeed moving towards the forecasted future beliefs, and don’t account sufficiently well for the possibility that this is just a self-fulfilling prophecy, and thus increase the weight they assign to these forecasts. (There could perhaps be a feedback loop.)
I imagine there’s always some possibility that forecasts will influence reality in a way that makes the forecasts more or less likely to come true than they would’ve been otherwise. But this seems more-than-usually-likely when forecasting EA community beliefs (compared to, e.g., forecasting geopolitical events).
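To make the worry concrete, here’s a toy simulation of the feedback loop; the updating rule and all parameters are invented for illustration, not a claim about how strongly the community or forecasters actually update:

```python
# Toy model of the worry: community belief and forecast pull each
# other together. The updating rule and all parameters are invented
# for illustration; this is not a claim about actual behaviour.

def simulate(belief, forecast, rounds=10,
             community_pull=0.3, forecaster_pull=0.5):
    """Each round, the community moves toward the forecast, then
    forecasters move toward the (now shifted) community belief."""
    for _ in range(rounds):
        belief += community_pull * (forecast - belief)
        forecast += forecaster_pull * (belief - forecast)
    return belief, forecast

# The community starts at 0.2 credence; a small group of forecasters
# posts an unfounded 0.8. The two values converge toward each other.
print(simulate(belief=0.2, forecast=0.8))  # both roughly 0.48
```

Because each side pulls toward the other, the forecast ends up looking accurate whether or not it tracked anything real, which is exactly the failure mode described above.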
I agree this would be a genuine problem. I think it would be a little less of a problem if the question being forecasted weren’t about the EA community’s beliefs but instead about the state of AI/climate change/pandemics themselves.
This is really interesting, and potentially worth my abandoning the plan to write some questions on the outcomes of future EA surveys.
The difficulty with “what will people in general think about X” type questions is how to operationalise them, but there’s potentially enough danger in doing this for it not to be worth the tradeoff. I’m interested in more thoughts here.
In terms of “how big a deal will X be?” questions, there are several already of that form. The Metaculus search function is not amazing, so I’m happy to dig things out if there are areas of particular interest, though several are mentioned elsewhere in this thread.
Do you mean questions like “what will the state of AI/climate change/pandemics be” (as Khorton suggests), or things like “How big a deal will Group A think X is”? I assume the former?
I’m not sure I know what you mean by this (particularly the part after the comma).
Yes.
The “not” was in the wrong place; I’ve fixed it now.
I had briefly got in touch with Rethink about trying to predict survey outcomes, but I’m not going ahead with this for now, as the concerns you raised seem bad even if low-probability. I’m considering, as an alternative, asking about the donation split of EAF in ~5 years, which I think tracks related ideas but seems to have less downside risk of the form you describe.
To lay out my tentative position a bit more:
I think forecasts about what some actor (a person, organisation, community, etc.) will overall believe in future about X can add value compared to just having a large set of forecasts about specific events that are relevant to X. This is because the former type of forecast can also account for:
- how the actor will interpret the evidence that those specific events provide regarding X
- lots of events we might not think to specifically forecast that could be relevant to X
On the other hand, forecasts about what some actor will believe in future about X seem more at risk of causing undesirable feedback loops and distorted beliefs than forecasts about specific events relevant to X do.
I think forecasting the donation split of the EA Funds[1] would be interesting, and could be useful. This seems to be a forecast of a specific event that’s unusually well correlated with an actor’s overall beliefs. I think that means it would have more of both the benefits and the risks mentioned above than the typical forecast of a specific event would, but less than a forecast that’s directly about an actor’s overall belief would.
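To illustrate, a donation-split question could resolve on each fund’s share of total payouts over some window. A minimal sketch follows; the fund names roughly follow EA Funds’ categories, and the payout figures are invented placeholders, not real data:

```python
# Sketch of resolving "what will the EA Funds donation split be?".
# Fund names roughly follow EA Funds' categories; the payout figures
# are invented placeholders, not real data.
payouts = {
    "Global Health and Development": 3_200_000,
    "Animal Welfare": 1_100_000,
    "Long-Term Future": 1_800_000,
    "EA Infrastructure": 600_000,
}
total = sum(payouts.values())
for fund, amount in payouts.items():
    print(f"{fund}: {amount / total:.1%}")
# A question could then resolve on whether, say, the Long-Term Future
# share exceeds some pre-specified threshold.
```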
This also makes me think that another thing potentially worth considering is predicting the beliefs of an actor which:
- is a subset of the EA community[2]
- seems to have a good process of forming beliefs
- seems likely to avoid updating problematically based on the forecast
Some spitballed examples, to illustrate the basic idea: Paul Christiano, Toby Ord, a survey of CEA staff, a survey of Open Phil staff.
This would still pose a risk of causing the EA community to update too strongly on erroneous forecasts of what this actor will believe. But it seems to at least reduce the risk of self-fulfilling prophecies/feedback loops, which somewhat blunts the effect.
I’m pretty sure this sort of thing has been done before (e.g., sort-of, here). But this is a rationale for doing it that I hadn’t thought of before.
But this is just a list of considerations and options; I don’t know how to actually weigh it all up to work out what’s best.
[1] I assume you mean EA Funds rather than the EA Forum or the Effective Altruism Foundation—lots of EAFs floating about!
[2] I only give this criterion because of the particular context and goals at hand; there are of course many actors outside the EA community whose beliefs we should attend to.
The best operationalisation I can see here is asking that we be able to attach a few questions of this form to the 2030 EA survey, then asking users to predict what the results will be. If we can get some sort of pre-commitment from whoever runs the survey to include the questions, even better.
One thing to think about (and maybe for people to weigh in on here) is that as you get further out in time, there’s less and less evidence that forecasting performs well. For that reason, it’s worth also considering a 2025 date for these sorts of questions.
Another operationalisation would be to ask to what extent the 80k top career recommendations have changed, e.g. what percentage of the current top recommendations will still be in the top recommendations in 10 years.
This question is now open.
How many of the “priority paths” identified by 80,000 Hours will still be priority paths in 2030?
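For what it’s worth, the resolution arithmetic here is just set overlap between the two lists. A minimal sketch, with placeholder path names rather than the actual 80,000 Hours lists:

```python
# Sketch of resolving "what percentage of current top recommendations
# will still be top recommendations in 10 years?". Path names are
# placeholders, not the actual 80,000 Hours lists.
paths_2020 = {"AI safety research", "biorisk research", "grantmaking",
              "operations", "global priorities research"}
paths_2030 = {"AI safety research", "biorisk research",
              "global priorities research", "forecasting"}

retained = paths_2020 & paths_2030
print(f"{100 * len(retained) / len(paths_2020):.0f}% retained")  # 60%
```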
I really like this and will work something out to this effect.
Do you want to have a look at the 2019 EA survey and pick a few things it would be most useful to get predictions on? I’ll then write a few up.
I think the ‘Diets of EAs’ question could be a decent proxy for the prominence of animal welfare within EA. I think there are similar questions on Metaculus for the general US population: https://www.metaculus.com/questions/?order_by=-activity&search=vegetarian
I don’t see the ethics question as all that useful, since I think most of population ethics presupposes some form of consequentialism.
It looks like a different part of the survey asked about cause prioritisation directly, which seems like it could be closer to what you wanted. My current plan (5 questions) for how to use the survey is here.
Somewhat unrelated, but I’ll leave this thought here anyway: maybe EA Metaculus users could benefit from posting question drafts as short-form posts on the EA Forum.
I’m kind of hoping that this thread ends up serving that purpose. There’s also a thread on Metaculus where people can post ideas; the difference there is that nobody’s promising to write them up, and they aren’t necessarily EA ideas, but I thought it was worth mentioning.
(I do have some thoughts on the top-level answer here, but don’t have time to write them now; will do soon.)