Do you think it’s a serious enough issue to warrant some...not very polite responses?
Maybe it would be better if policy makers just go and shut AI research down immediately instead of trying to make reforms and regulations to soften its impact?
Maybe this information (that AI researchers themselves are increasingly pessimistic about the outcome) could sway public opinion to that point?
Just as anti-AI violence would be counter-productive, in terms of creating a public backlash against the violent anti-AI activists, I would bet (with only low-to-moderate confidence) that an authoritarian government crackdown on AI would also provoke a public backlash, especially among small-government conservatives, libertarians, and anti-police liberals.
I think public sentiment would need to tip against AI first, and then more serious regulations and prohibitions could follow. So, if we’re concerned about AI X-risk, we’d need to get the public to morally stigmatize AI R&D first—which I think would not be as hard as we expect.
I think we’re at the stage now where we should be pushing for a global moratorium on AGI research. Getting the public on board morally stigmatizing it is an important part of this (cf. certain bio research like human genetic engineering).
I suspect that all three political groups you mentioned (maybe not the libertarians) could be convinced to turn collectively against AI research. After all, governmental capacity is probably the first thing that will benefit significantly from more powerful AIs, and that could be scary enough for ordinary people or even socialists.
Perhaps the only guaranteed opposition to pausing AI research would come from the relevant corporations themselves (they are, of course, immensely powerful, but maybe they'll accept an end to this arms race anyway), their dependents, and maybe some sections of libertarians and progressives (though I doubt many of them are committed to supporting AI research).
Public opinion is probably not very positive about AI research, but it is also perhaps a bit apathetic about what's happening. Maybe the information in this survey, properly presented in a news article or something, could rally some public support for AI restrictions.
Public sentiment is already mostly against AI when public sentiment has an opinion. Though it's not a major political issue (yet), so people may not be thinking about it. If it turns into a major political issue (there are ways of regulating AI without turning it into a major political issue, and you probably want to do so), then it will probably become 50/50 due to what politics does to everything.