I think the fact that these results are surprising indicates that EAs underestimate how likely this is. AI has many bad effects: social media, bias and discrimination, unemployment, deepfakes, etc. Plus I think sufficiently competent AI will seem scary to people; a lot of people aren't really aware of recent developments, but I think they would be freaked out if they were. We should position ourselves to make use of this backlash if it happens.
Yes, I think that once AI systems start communicating with ordinary people through ordinary language, simulated facial expressions, and robot bodies, there will be a lot of ‘uncanny valley’ effects, spookiness, unease, and moral disgust in response.
And once technological unemployment from AI really starts to bite into blue-collar and white-collar jobs, people will not just say 'Oh well! Life is meaningless now, and I have no status or self-respect, and my wife/husband thinks I'm a loser, but universal basic income makes everything OK!'
I agree, but with a caveat: EA should be willing to ditch any group that makes this a partisan issue rather than a bipartisan consensus. I can easily see a version of this where it gets politicized, and 'AI safety' becomes a curse word, the way terms like 'globalist' and 'anti-racist' have for some audiences.
The tricky thing is that almost any issue tends to become partisan and polarized if it's even slightly associated with existing partisan positions, and if any political group stands to gain from polarizing it.
I have trouble imagining a future in which AI and AI safety issues don't become partisan and polarized. The political incentives for pushing them in one direction or another would just be too strong.