US public opinion on AI, September 2023
Link post
I am aware of 28 high-quality public surveys on AI so far this year; I have collected them on this wiki page. Most claims are not justified in this post; the evidence is on that page.
Concern is high, as are pessimism, worry, perceived danger, and support for caution.
Support for regulation is high. Some surveys ask about AI policy proposals; here’s one good set of questions. Most people prefer slowing AI progress to accelerating it; most people support a pause on some AI development but not a ban on AI.[1]
AI is the most important political issue for <1% of people.[2] When asked in the context of other specific threats, AI is seen as relatively unimportant; people are more concerned about nuclear war, climate change, and pandemics. Most people agree with the CAIS statement: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
The most salient perceived risks or downsides of AI are:
- misinformation and deepfakes;
- automation; and
- surveillance, loss of privacy, and hacking.
Autonomous weapons, losing control of AI, and AI bias are less salient.
Survey results on extinction risk and time to human-level AI are striking but misleading. Attitudes on extinction risk by 2100 are very sensitive to framing (one survey finds median probability 1%, another finds 15%); true attitudes (insofar as they exist) must be much lower than 15% given that several other risks are seen as bigger. Expected time to human-level AI is nominally ~5 years but also seems very sensitive to framing, and 5 years is implausibly short given people’s other attitudes (in open-ended questions people don’t seem to think that AI will be transformative soon, many other issues and threats are seen as more important than AI, most people don’t think their job will be automated soon, etc.). Three surveys asking whether AI is already smarter than humans found 6%, 10%, and 22% agreement;[3] Lizardman’s Constant may be relatively high in surveys on AI.
What comes to mind when people think about AI includes robots, chatbots, text generation tools, and virtual assistants.
Optimism about AI is correlated with being young, male, college-educated, and a Democrat. Support for regulation is similar between Democrats and Republicans.
Thanks to Rick Korzekwa for comments on a draft.
Slowing:
- Public First: 11% say we should accelerate AI development, 33% say we should slow it down, 39% say we should continue at around the same pace.
- Data for Progress: 56% agree with a pro-slowing message, 35% agree with an anti-slowing message.
- Ipsos: 75% say “unchecked development of AI” is the bigger risk; 21% say “government regulation slowing down the development of AI” is the bigger risk.
- AI Policy Institute / YouGov (1, 2):
  - 82% “We should go slowly and deliberately,” 8% “We should speed up development” (after reading brief arguments).
  - “It would be a good thing if AI progress was stopped or significantly slowed”: 62% agree, 26% disagree.
  - 72% “We should slow down the development and deployment of artificial intelligence,” 12% “We should more quickly develop and deploy artificial intelligence.”
Pausing:
- YouGov: A six-month pause on some kinds of AI development: 58% support, 23% oppose.
- Rethink Priorities: Pause on AI research: 51% support, 25% oppose (after reading pro- and anti-pause messages).
- YouGov: A six-month pause on some kinds of AI development: 30% strongly support, 26% somewhat support, 13% somewhat oppose, 7% strongly oppose.
- AI Policy Institute / YouGov: “A legally enforced pause on advanced artificial intelligence research”: 49% support, 25% oppose.
- YouGov: A six-month pause on some kinds of AI development: 58% strongly support, 22% somewhat support, 11% somewhat oppose, 8% strongly oppose.
Note that the AI Policy Institute is a pro-caution advocacy organization, and Public First, Data for Progress, and Rethink Priorities may have agendas too.
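For a quick side-by-side comparison, the pause results above can be tallied as net support (support minus oppose). This is a minimal sketch: the figures are copied from the polls listed, the short poll labels are mine, and summing “strongly” and “somewhat” responses into a single support/oppose figure is a simplification.

```python
# Net support (support minus oppose) for the pause questions listed above.
# Figures are copied from the polls in this post; "strongly" and "somewhat"
# responses are summed where a poll reports them separately.
polls = {
    "YouGov (1)": (58, 23),
    "Rethink Priorities": (51, 25),
    "YouGov (2)": (30 + 26, 13 + 7),           # 56 support, 20 oppose
    "AI Policy Institute / YouGov": (49, 25),
    "YouGov (3)": (58 + 22, 11 + 8),           # 80 support, 19 oppose
}

for name, (support, oppose) in polls.items():
    print(f"{name}: {support}% support, {oppose}% oppose, net {support - oppose:+d}")
```

Every poll shows positive net support for a pause, though the margins range widely (roughly +24 to +61 points), consistent with the framing sensitivity discussed above.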
E.g. Most Important Problem (Gallup 2023).
YouGov, Rethink Priorities, and Axios/MC, respectively. Axios/MC found 34% say humans are smarter than AI, 22% say AI is smarter than humans, 16% say humans and AI are equally smart, and 28% don’t know or have no opinion! But some respondents may have interpreted the Axios/MC question as asking about intelligence in the abstract rather than currently.