Awesome to hear! I’m happy that OpenPhil has promoted forecasting to its own dedicated cause area with its own team; I’m hoping this provides more predictable funding for EA forecasting work, which otherwise has felt a bit like a neglected stepchild compared to GCR/GHD/AW. I’ve spoken with both Ben and Javier, who are both very dedicated to the cause of forecasting, and am excited to see what their team does this year!
Preventing catastrophic risks, improving global health, and improving animal welfare are goals in themselves. At best, forecasting is a meta topic that supports those goals.
Yes, it’s a meta topic; I’m commenting less on the importance of forecasting in an ITN framework and more on its neglectedness. This stuff basically doesn’t get funding outside of EA, and even inside EA it had no institutional commitment; outside of scattered one-off grants, the largest forecasting funding program I’m aware of over the last two years was $30k in “minigrants” funded by Scott Alexander out of pocket.
But on its importance: insofar as you think future people matter and that we have the ability and responsibility to help them, forecasting the future is paramount. Steering today’s world without understanding the future would be like trying to help people in Africa without overseas reporting to guide you: you’ll obviously do worse if you can’t see the outcomes of your actions.
You can make a reasonable argument (as some other commenters do!) that the tractability of forecasting to date hasn’t been great; I agree that the most common approaches of “tournament-style forecasting” or “superforecaster consulting” haven’t produced much of decision relevance. But there are many other possible approaches (e.g. FutureSearch.ai is doing interesting things using LLMs to forecast), and I’m again excited to see what Ben and Javier do here.
I don’t think it’s necessary to talk in terms of an ITN framework, but something being neglected isn’t nearly reason enough to fund it; neglectedness is perhaps the least important part of the framework. Getting six-year-olds into race cars, for example, seems like a neglected cause, but not one worth pursuing.
I think something not getting funding outside of EA is probably a medium-sized update toward the thing not being important enough to work on. Things start to get EA funding once enough of the community finds the arguments for working on a problem sufficiently convincing. But many, many problems have come across EA’s eyes, and very few of them have stuck. That something gets no funding from others suggests that very few others found it important.
Forecasting still seems to get a fair amount of dollars, probably about half as much as animal welfare. https://docs.google.com/spreadsheets/d/1ip7nXs7l-8sahT6ehvk2pBrlQ6Umy5IMPYStO3taaoc/edit?usp=sharing
Your points on helping future people (and non-human animals) are well taken.
Yeah, I agree neglectedness is less important, but it does capture something important; I think, e.g., climate change is both important and tractable but not neglected. In my head, “importance” asks “how much would a perfectly rational world direct at this?” while “neglectedness” asks “how far are we from that world?”
Also agreed that the lack of external funding is an update that forecasting (as currently conceived) has more hype than real utility. I tend to think this is because of the narrowness of how forecasting is currently framed, though (see my comments on tractability above).
That’s a great resource I wasn’t aware of, thanks (did you make it?). I do think OpenPhil has spent a commendable amount of money on forecasting to date (though nowhere near half of Animal Welfare’s total; more like a tenth). But this has been done very unsystematically, with no dedicated grantmaker. My understanding is that it was a side project of Luke Muehlhauser’s for a long time; when I reached out in Jan ’23 he said they were not making new forecasting grants until they filled this role. Even if it took a year, I’m glad this program has now launched!
I think your point 1 is a good starting point, but I would add “in percentage terms compared to all other potential causes,” and you have to be in the top 1% of that for EA to consider the cause neglected.
3. I didn’t make it. It is great, though. I was talking about spending on a yearly basis over the last couple of years. That said, I made the comment from memory, so I could be wrong.