I haven’t been able to find details about the survey’s methodology (see here for info about YouGov’s general methodology) or the credibility of YouGov (EDIT: Several people I trust have told me that YouGov is credible, well-respected, and widely quoted for US polls).
Agreed that YouGov are a reputable pollster. That said, I think the wording of their concern question has some unfortunate features which likely bias the results (which is common even among reputable pollsters).
Asking “How concerned, if at all, are you about the possibility that AI will cause the end of the human race on Earth” is, on its face, ambiguous between (i) asking how concerned you are, factoring in your estimate of how probable the outcome is, and (ii) asking how concerned you are about the outcome itself, were it to occur. The question doesn’t separate how likely respondents think the outcome is from how bad they think it would be. As a result, respondents may say they are “very concerned” about “the possibility that AI will cause the end of the human race on Earth” simply to convey that they regard this possibility (AI causing the end of the human race) as very bad. This is especially likely because respondents typically don’t interpret questions entirely literally; as in ordinary communication, they read them pragmatically, based on what they think the questioners are interested in asking and on what they themselves want to signal. I would predict that this slightly inflates reported levels of concern.