This post is perhaps the most important thing I've read on the EA Forum. (Update: OK, I'm less optimistic now, but it still seems very promising.)
The post: "Instead of technical research, more people should focus on buying time"
The main argument that I updated on was this:

"Multiplier effects: Delaying timelines by 1 year gives the entire alignment community an extra year to solve the problem."
In other words, if I am capable of doing an average amount of alignment work x̄ per unit time, and I have n units of time available before the development of transformative AI, I will have contributed x̄·n work. But if, by focusing on buying time instead, I can delay transformative AI by m units of time, everyone will have that additional time to do alignment work, which means my impact is x̄·m·p, where p is the number of people doing alignment work. Naively, then, if m·p > n, I should focus on buying time.[1]
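To make the inequality concrete, here is a purely illustrative calculation; the numbers n = 10 years and p = 300 researchers are hypothetical placeholders, not estimates from the post:

\[
\bar{x}\, m\, p > \bar{x}\, n
\quad\Longleftrightarrow\quad
m > \frac{n}{p} = \frac{10\ \text{years}}{300} \approx 0.033\ \text{years} \approx 12\ \text{days}.
\]

Under these assumptions, buying roughly two weeks of delay would already match a decade of one person's direct research, which is why m·p > n is easy to satisfy once p is at all large.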
This analysis further favours time-buying if the total amount of work per unit time accelerates, which is plausibly the case if, for example, the alignment community grows over time.
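One way to sketch that refinement, assuming the number of people working at time t is p(t) (non-decreasing), everyone contributes at the average rate x̄, and transformative AI would otherwise arrive at time T (p(t) and T are notation introduced here, not from the post):

\[
\text{extra work from a delay of } m \;=\; \int_{T}^{T+m} \bar{x}\, p(t)\, dt \;\ge\; \bar{x}\, p(T)\, m \;\ge\; \bar{x}\, m\, p,
\]

so when the field keeps growing, the constant-workforce estimate x̄·m·p understates the value of the delayed time.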
[1] This assumes time-buying and direct alignment work are independent, whereas I expect doing either will help with the other to some extent.