I think this is valuable research, and a great write-up, so I'm curating it.
I think this post is so valuable because having accurate models of what the public currently believes seems very important for AI comms and policy work. For instance, I personally found it surprising how few people disbelieve that AI is a major risk (only 23% disbelieve it being an extinction-level risk), and how few people dismiss it for "sci-fi" reasons. I have seen fear of "being seen as sci-fi" treated as a major consideration around AI communications within EA, so if the public is not (or is no longer) put off by this, that would be an important update for people working in AI comms.
I also like how clearly the results are presented, with a lot of the key information contained in the first graph.
Thanks!
Just to clarify, we don't find in this study that only 23% of people disbelieve AI is an extinction risk. This study shows that, of those who disagreed with the CAIS statement, 23% explained this in terms of AI not causing extinction.
So, on the one hand, this is a percentage of a smaller group (only 26% of people disagreed with the CAIS statement in our previous survey), not of everyone; 23% of that 26% comes to roughly 6% of all respondents. On the other hand, it could be that more people also disbelieve AI is an extinction risk but didn't cite that as their reason for disagreeing with the statement, or that some agree with the statement while not believing AI is an extinction risk.
Fortunately, our previous survey looked at this more directly: we found 13% expressed that there was literally zero probability of extinction from AI, though around 30% indicated 0-4% (the median was 15%, which is not far off some EA estimates). We can provide more specific figures on request.
In 2015, one survey found 44% of the American public would consider AI an existential threat. In February 2023 it was 55%.
I think Monmouth's question is not exactly about whether the public believes AI to be an existential threat. They asked: "How worried are you that machines with artificial intelligence could eventually pose a threat to the existence of the human race – very, somewhat, not too, or not at all worried?" The 55% you cite is those who said they were "very worried" or "somewhat worried."
Like the earlier YouGov poll, this conflates an affective question (how worried are you) with a cognitive question (what do you believe will happen). That's why we deliberately split these in our own polling, which cited Monmouth's results, and also asked for explicit probability estimates in our later polling, cited above.