Argument summary: Existential risk may hinge on whether AGI development is centralized into a single major project; centralization is good because it gives that project more time for safety work and for securing the world. I look at some arguments about whether earlier or later AI development is better for centralization, and overall I think the answer is unclear.
Yeah, I agree that it seems unclear. But I feel like the current state of things is clearly suboptimal, and if we need something extraordinary to happen to get the AI transition right, that only has a chance of happening with more time. I’m envisioning something like “a globally coordinated ban on large training runs + a CERN-like alignment project with well-integrated safety evals to keep the focus on alignment research before we accidentally create AI agents with dangerous capabilities.” (Maybe we don’t need this degree of coordination, but it’s the sort of thing we definitely can’t achieve under very short timelines.)