[Question] What is AI Safety’s line of retreat?

Given that we are not on track to keep fully autonomous systems controllable and safe, what do we do?

Let’s say alignment research will not catch up with capability development. This could be for any combination of reasons: corporations are scaling too fast, there are too many lethal-if-unsolved subproblems on which we have made at most partial progress, some subproblems can only be solved sequentially, or there are hard limits capping the progress we can make on the control problem.

What do we do? By ‘we’, you can consider yourself, specific organisations, or the community coordinating roughly as a whole.
