David—thanks much for sharing the link to this Monmouth University survey. I urge everybody to have a look at it here (the same link you shared).
The survey looks pretty good methodologically: a probability-based national random sample of 805 U.S. adults, run by a reputable academic polling institute.
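For readers wondering how precise those percentages are: a quick back-of-envelope sketch of the 95% margin of error for a simple random sample of 805 (the function name and the simplifying assumptions — worst-case p = 0.5, no design-effect adjustment for weighting — are mine, not the pollster's):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a proportion from a simple random sample.
    Uses the worst-case p = 0.5 and ignores any design effect from weighting."""
    return z * math.sqrt(p * (1 - p) / n)

moe = margin_of_error(805)
print(f"±{moe * 100:.1f} percentage points")  # roughly ±3.5 points
```

So the headline figures below should be read as accurate to within a few percentage points, not exact.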
Two key results are worth highlighting, IMHO:
First, in response to the question “How worried are you that machines with artificial intelligence could eventually pose a threat to the existence of the human race – very, somewhat, not too, or not at all worried?”, 55% of people (as you mentioned) were ‘very worried’ or ‘somewhat worried’, and only 16% were ‘not at all worried’.
Second, in response to the question “If computer scientists really were able to develop computers with artificial intelligence, what effect do you think this would have on society as a whole? Would it do more good than harm, more harm than good, or about equal amounts of harm and good?”, 41% predicted more harm than good, and only 9% predicted more good than harm.
Long story short, the American public is already very concerned about AI X risk, and very dubious that AI will bring more benefits than costs.
This contrasts markedly with the AI industry's rhetoric/PR/propaganda, which claims that everybody is excited about the wonderful future AI will bring and embraces that future with open arms.
Thanks for sharing @Geoffrey Miller and @DavidNash .
The results of this study are certainly interesting. Examining them more carefully makes me wonder whether a significant priming effect was in play in both the 2015 and 2023 polls. Priming common to both polls would not, by itself, explain the 11-percentage-point increase in participants worried about AI eventually posing a threat to the existence of the human race, though it could have contributed, since some questions were added to the 2023 poll that weren't in the 2015 one.
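Whatever the cause, the 11-point rise itself is well outside sampling noise. A rough two-proportion z-test sketch (assuming, hypothetically, that the 2015 poll also had about 805 respondents — I haven't checked the 2015 release — and using 44% worried in 2015 vs. 55% in 2023):

```python
import math

def two_prop_z(p1, n1, p2, n2):
    """Two-proportion z-statistic with a pooled standard error."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p2 - p1) / se

# 2015: ~44% worried; 2023: 55% worried. The 2015 sample size is assumed.
z = two_prop_z(0.44, 805, 0.55, 805)
print(f"z ≈ {z:.2f}")  # well above the 1.96 threshold for p < 0.05
```

So even if priming shifted responses somewhat, sampling error alone is very unlikely to account for a shift this large.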
I was surprised that in 2023, only 60% of participants had "heard about A.I. products – such as ChatGPT – that can have conversations with you and write entire essays based on just a few prompts from humans" (Question 26).
It looks like they used a telephone survey. I would imagine that getting 805 random participants willing to answer a call from a (presumably) unrecognized number, much less take part in a 39-question phone survey, would be tough these days. I don't see any mention of incentives for participation, though.
Fascinating!