Thanks Ozzie for chatting! A few notes reflecting on places I think my arguments in the conversation were weak:
It’s unclear what short timelines would mean for AI-specific forecasting. If AI timelines are short, that heavily discounts non-AI forecasting (and some flavors of AI forecasting), but the implications for forecasting AI itself are less clear: there’s less time for effects to compound, but you also have more information and more proximity to the most important decisions.
I also feel uneasy about the comparison I made between forecasting and just waiting for things to happen in the world. There might be something to it, but forcing yourself to think deeply about what will happen helps you build better models of the world, which in turn helps you interpret new events as they occur.