I recently shared a link to this piece[1] in the EA Newsletter (in the “Timeless Classics” section). The post had come up in some conversations I was having about how to think about AI timelines, and I also happened to come across the newer Twitter thread about it.
Cross-posting my brief summary here in case anyone is interested (or wants to point out how it might be wrong :)).
Is it better to work on risks close to when they would occur, or to get started as soon as possible?
In an analysis from 2014 (and a recent Twitter thread), Toby Ord explores the timing of different kinds of work on reducing risks, and notes some relevant factors:
- Nearsightedness: the further away something is, the more uncertainty we have, meaning that our efforts could be misguided.
- Course setting: it is harder to redirect a big effort later on, so it can make sense to spend a lot of resources early to lay groundwork that usefully directs later work.
- Self-improvement: skill-building and other lasting improvements to your capacities that require only a small amount of upkeep are useful to work on early.
- Growth (movement-building): early efforts can significantly increase the resources available to work on the problem when it is looming.
[1] Although I went with a non-Forum link.