Try and sell me on AGI safety if I’m a social justice advocate! That’s a big one I come across.
I gave this a shot and it ended up being an easier sell than I expected:
“AI is rapidly growing in scale and importance. The cutting-edge work is now done mainly by large corporations, and the people working on it are still overwhelmingly male, with many disadvantaged groups largely excluded.
In addition to many other potential dangers, we already know that AI systems trained on data drawn from society can unintentionally come to reflect its unjust biases: many of the largest and most impressive AI systems today have this problem to some extent. Most of the people doing AI research are quite privileged, and many are willfully oblivious to these dangers.
Overall, these corporations expect huge profits and power from developing advanced AI, and they’re recklessly pushing ahead on capabilities without sufficiently considering the harms those capabilities might cause.
We need to massively increase the amount of work we put into making these AI systems safe. We need a better understanding of how they work, how to make them reflect just values, and how to prevent possible harms, especially since those harms are likely to fall disproportionately on disadvantaged groups. We even need to consider pushing the corporations building them to slow development until we can be sure they won’t damage society. The more powerful these systems become, the more serious the danger, so we need to start right now.”
I bet it would go badly if one tried to sell a social justice advocate on some kind of grand transhumanist vision of the far future, or even just on generic longtermism, but it’s possible to think about AI risk without those other commitments.
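To make the bias point in the pitch concrete, here’s a minimal, self-contained sketch. Everything in it is synthetic and hypothetical (made-up data, a toy hiring setup, arbitrary coefficients), not a description of any real system; it just shows how a model trained on historically biased labels faithfully learns the bias, even when group membership is irrelevant to the task:

```python
# Toy illustration: a model trained on biased historical data
# reproduces that bias. All data here is synthetic and hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Feature 0: a genuinely job-relevant skill score.
# Feature 1: group membership (0 or 1), irrelevant to the job.
skill = rng.normal(size=n)
group = rng.integers(0, 2, size=n)

# Historical hiring labels: partly based on skill, but biased
# against group 1 regardless of skill.
logits = 1.5 * skill - 2.0 * group
hired = rng.random(n) < 1 / (1 + np.exp(-logits))

X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)

# The model learns a large negative weight on group membership:
# it has absorbed the bias baked into the training labels.
print("skill weight:", model.coef_[0][0])
print("group weight:", model.coef_[0][1])  # strongly negative
```

Nothing exotic is happening here: the model is doing exactly what it was trained to do, which is the pitch’s point about systems trained on data from an unjust society.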