Thanks for doing this, great idea! I think Metaculus could provide some valuable insight into how society’s/EA’s/philosophy’s values might drift or converge over the coming decades.
For instance, I’m curious about where population ethics will be in 10-25 years. Something like: ‘In 2030, will the consensus within effective altruism be that “total utilitarianism is closer to describing our best moral theories than average utilitarianism and person-affecting views”?’
Your insight on how to operationalize this would be useful, since I’m not very happy with my own ideas:
1. Polling FHI and GW
2. A future PhilPapers Survey, if there is one
3. Some sort of citation count / number of papers on total utilitarianism, average utilitarianism, and person-affecting views

It would probably also be useful to get the opinion of a population ethicist.
Stepping back from that specific question, I think Metaculus could play a sort of sanity-checking, outside-view role for EA, with questions like ‘Will EA see AI risk (or climate change, bio-risk, etc.) as less pressing in 2030 than it does now?’ or ‘Will EA in 2030 believe that EA should’ve invested more and donated less over the 2020s?’
Somewhat unrelated, but I’ll leave this thought here anyway: EA Metaculus users might benefit from posting question drafts as short-form posts on the EA Forum.