I’m not particularly well informed about current EA discourse on AI alignment, but I imagine that two possible strategies are:
1. accelerating alignment research and staying friendly with the big AI companies, or
2. getting governments to slow AI development in a worldwide-coordinated way, even if this angers people at AI companies.
Yudkowsky’s article helps push on the latter approach. Making the public and governments more worried about AI risk seems to me the most plausible way of slowing AI development down. If more people in the national-security community worry about AI risks, these issues could get a lot more attention, along with the possibility of policies, such as limiting the total computing power available for AI training, that only governments could pull off.
I expect many AI developers would be angry about efforts to alarm the public and governments, but if those efforts work well enough, the developers will have to comply. On the other hand, there’s also a possible “boy who cried wolf” situation in which AI progress continues, nothing that bad happens for a few years, and people then assume the doomsayers were overreacting, making it harder to ring alarm bells the next time.