This post makes the case that warning shots won't change the policy picture much, but I could imagine a world where some warning shot makes the leading AI labs decide to focus more on safety, or agree to slow their deployment, without any policy change occurring. Maybe this could buy a couple of years' time for safety researchers?
This isn't a well-developed thought, just something that came to mind while reading.