Thanks for writing this up, and I’m excited about FutureSearch! I agree with most of this, but I’m not sure framing it as more in-depth forecasting is the most natural, given how people generally use the word forecasting in EA circles (e.g. associated with Tetlock-style superforecasting, often aggregation of very part-time forecasters’ views, etc.). It might imo be more natural to frame it as a need for in-depth research, perhaps with a forecasting flavor. Here’s part of a comment I left on a draft.
However, I kind of think the framing of the essay is wrong [ETA: I might hedge “wrong” a bit if writing on EAF :p] in that it categorizes as “forecasting” a thing that I think is more naturally categorized as “research” — to avoid confusion. See point (2)(a)(ii) at https://www.foxy-scout.com/forecasting-interventions/ ; basically I think calling “forecasting” anything where you slap a number on the end is confusing, because basically every intellectual task/decision can be framed as forecasting.
It feels like this essay is overall arguing that AI safety macrostrategy research is more important than AI safety superforecasting (and superforecasting is what EAs mean when they say “forecasting”). I don’t think the distinction being pointed to here is necessarily whether you put a number at the end of your research project (though I think that’s usually useful as well), but rather the difference between deep research projects and Tetlock-style superforecasting.
I don’t think they are necessarily independent, btw — they might be complementary (see https://www.foxy-scout.com/forecasting-interventions/ (6)(b)(ii) ) — but I agree with you that the research is generally more important to focus on at the current margin.
[...] Like, it seems more intuitive to call https://arxiv.org/abs/2311.08379 a research project rather than forecasting project even though one of the conclusions is a forecast (because as you say, the vast majority of the value of that research doesn’t come from the number at the end).
Agreed, Eli — I’m still working to understand where the forecasting ends and the research begins. You’re right, the distinction is not whether you put a number at the end of your research project.
In AGI (or other hard-science domains) the two kinds of work may be very different, and done by different people. But in other fields, like geopolitics, I see Tetlock-style forecasting as central, even necessary, for research.
At the margin, I think forecasting should be more research-y in every domain, including AGI. Otherwise I expect AGI forecasts will continue to be used, while not being very useful.