Great post. I think there is a key difference between animal advocacy (and the other social movements mentioned) and AI x-safety though, that isn’t being appreciated enough. Namely, that AI extinction risk is not about advocating for marginalised/oppressed groups, or those with no voice, or even anything related to “do gooding” at all. It’s no longer even about saving future generations. It’s about personal survival for each and every one of us and our families and friends, in the near-term future. Personal survival for the people who make up the corporations that are pushing AGI development. It’s not about ethics, or trade-offs between profit and CSR or ESG, or competition between companies and nations. It’s not even about MAD. It’s unilaterally assured destruction. No one can safely wield the technology. It needs to be curtailed globally.
With all this in mind, it should really be an easier sell to get corporations to stop. I’m not hopeful that this will happen without government regulation and international treaties though (because of attitudes like this). Hopefully that will happen in time, but we need to be doing all we can to push for it (including corporate campaigns).
You’re right that there are big differences. I’m inclined to agree that some asks should be an “easier sell” too. I’m wondering if you think that these differences notably affect the arguments of this post?
I think potentially they do, in terms of the typical playbook and framing for corporate campaigns being less relevant. As in, it’s less moral outrage vs. profit and appealing to corporations to be good, and more reckless endangerment vs. naive optimism and appealing to people at corporations (who presumably care about their own safety) to see sense. Morality/ethics doesn’t need to be a factor, assuming people care for their own lives and those of their family and friends.