Largely agree, but results like this (1) indicate that if AI does become more salient the public will be super concerned about risks and (2) might help nudge policy elites to be more interested in regulating AI. (And it’s not like there’s some other “real belief” that the survey fails to elicit—most people just don’t have ‘real beliefs’ on most topics.)
Well, maybe to both parts; it’s a good sign, but a weak one. There are also concerns about response bias, etc., especially since YouGov doesn’t specialize in polling these types of questions and there’s no “ground truth” here to compare against.