The strongest objection I have heard to this approach is that, as model algorithms improve, at some point it will become possible to train and run human-level intelligence on anyone’s home laptop, which makes monitoring and restricting hardware trickier. While this is cause for concern, I don’t think it should distract us from pursuing a pause.
There are many ways to slow AI development, but I’m concerned that it’s misleading to label any of them as pauses. I doubt that the best policies will be able to delay superhuman AI by more than a couple of years.
A strictly enforced compute threshold seems like it would slow AI development by something like 2x to 4x. AI capability progress would continue via distributed training and through improvements in implementation efficiency.
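As a rough back-of-envelope sketch of why a cap of that size only buys a year or two (my own illustrative assumptions, not figures from this thread): if effective frontier training compute grows at roughly 4x per year, then a lab held to 1/s of the uncapped frontier reaches any given compute level about log(s)/log(4) years later.

```python
import math

def delay_years(slowdown_factor: float, annual_compute_growth: float) -> float:
    """Years of delay from a fixed compute cap, assuming effective
    training compute otherwise keeps growing exponentially.

    A lab capped at 1/slowdown_factor of the uncapped frontier reaches
    any fixed compute level log(slowdown) / log(growth) years later.
    """
    return math.log(slowdown_factor) / math.log(annual_compute_growth)

# Illustrative numbers only: the ~4x/year effective-compute growth rate
# is my assumption, not a figure from this discussion.
for slowdown in (2, 4):
    print(f"{slowdown}x cap -> ~{delay_years(slowdown, 4.0):.1f} year(s) of delay")
```

On those assumptions, a 2x cap buys roughly half a year and a 4x cap roughly one year, which is consistent with the estimate above that even the best policies delay things by no more than a couple of years.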
Slowing AI development is likely good if the rules can be enforced well enough. My biggest concern is that laws will be carelessly written, with the result that the most responsible AI labs obey their spirit while the least responsible labs find loopholes to exploit.
That means proposal authors should focus carefully on imagining the ways AI labs could evade compliance with the regulations.
Right. I was also concerned that it might be misleading to call some of the proposals here ‘pauses’. Describing them as proposals to ‘significantly slow down development’ might be more accurate.
Maybe that’s a better way to frame the discussion of pausing: see it as a spectrum of stronger and weaker slowdown mechanisms?