I agree that this strategy is underexplored. I would prioritize work in this direction as follows:
1. What kind of regulation would be sufficiently robust to slow down, or even pause, all AGI capabilities actors? This should include research/software regulation, hardware regulation, and data regulation. I think a main reason many people consider this strategy unlikely to work is that they don't believe any practical regulation would be sufficiently robust. To my knowledge, though, that key assumption has never been properly investigated. It's time we did.
2. How could we practically implement sufficiently robust regulation? What would be required to do so?
3. How can we inform sufficiently large portions of society about AI x-risk to get robust regulation implemented? We are planning to do more research on this topic at the Existential Risk Observatory this year (we already have some first findings).