This is a very good post that identifies a big PR problem for AI safety research.
Your key takeaway might be somewhat buried in the last half of the essay, so let me see if I can draw out the point more vividly (and maybe hyperbolically):
Tens (hundreds?) of millions of centrist, conservative, and libertarian people around the world don’t trust Big Tech censorship because it’s politically biased in favor of the Left, and because it exemplifies a ‘coddling culture’ that treats everyone as neurotic snowflakes and treats offensive language as a form of ‘literal violence’. Such people see that a lot of these lefty, coddling Big Tech values have soaked into AI research, e.g. the moral panic about ‘algorithmic bias’, and the increased emphasis on ‘diversity, equity, and inclusion’ rhetoric at AI conferences.
This has created a potentially dangerous mismatch in public perception between what the more serious AI safety researchers think they’re doing (e.g. reducing X risk from AGI), and what the public thinks AI safety is doing (e.g. developing methods to automate partisan censorship, to embed woke values into AI systems, and to create new methods for mass-customized propaganda).
I agree that AI alignment research that is focused on global, longtermist issues such as X risk should be careful to distance itself from ‘AI safety’ research that focuses on more transient, culture-bound, politically partisan issues, such as censoring ‘offensive’ images and ideas.
And, if we want to make benevolent AI censorship a new cause area for EA to pursue, we should be extremely careful about the political PR problems that would raise for our movement.
This has created a potentially dangerous mismatch in public perception between what the more serious AI safety researchers think they’re doing (e.g. reducing X risk from AGI), and what the public thinks AI safety is doing (e.g. developing methods to automate partisan censorship, to embed woke values into AI systems, and to create new methods for mass-customized propaganda).
This is the crux of the problem, yes. I don’t think this is because of a “conservative vs liberal” political rift, though; the left is just as frustrated by, say, censorship of sex education or queer topics as the right may be upset by censorship of “non-woke” discussion. What matters is that people’s particular triggers for what is or isn’t appropriate to censor vary enormously, both across populations and over time. I don’t think it’s necessary to bring politics into this as an explanatory factor (though it may of course exacerbate existing tension).
Yep, fair enough. I was trying to dramatize the most vehement anti-censorship sentiments in a US political context, from one side of the partisan spectrum. But you’re right that there are plenty of other anti-censorship concerns from many sides, on many issues, in many countries.