I agree this is really strange. Many AI people supposedly into safety don't seem to give much thought to the more obvious policies, at least publicly (unless someone can signpost otherwise).
Why not move national security research funding away from AI development and application and toward safety research?
Why not call out the risks and bring more skepticism to (a) the hope of ever achieving aligned AI, and (b) the idea that aligned AI would really improve the human condition anyway, while reminding people of the risks?
Why not ask all companies or industry researchers to apply for a permit, with some prior training in risks or safety, before working on anything more advanced than basic statistical algorithms? Or even professional registration? Just slow it down and make it more expensive. These bodies could be set up internationally without having to be passed into law.
Why not tempt coders and researchers who are making particularly good progress to work on something else? This could be done around the world, like counter-recruitment in espionage or competitive industries.