Thanks for sharing. This is a very insightful piece. I’m surprised that folks were more concerned about larger-scale abstract risks than about more well-defined, smaller-scale risks (like bias). I’m also surprised that they are this pro-regulation (including a six-month pause). Given this, I feel a bit confused that they mostly support the development of AI, and I wonder what has most shaped their views.
Overall, I mildly worry that the survey led people to express more concern than they feel, because the results seem surprisingly close to my perception of the views of many existential-risk “experts”. What do you think?
Would love to see this for other countries too. How feasible do you think that would be?
Thanks Seb. I’m not that surprised: public survey estimates in the Existential Risk Persuasion Tournament (XPT) were pretty high (5% for AI). I don’t think most people are good at calibrating probabilities between 0.001% and 10% (myself included).
I don’t have strong hypotheses for why people ‘mostly support’ something they also want treated with such care. My weak one is ‘people like technology, but when asked what the government should do, want it to keep them safe (remove the biggest threats).’ For example, Australians support getting nuclear submarines but also support the ban on nuclear weapons. I don’t necessarily see this as a contradiction: ‘keep me safe’ priorities would lead to both. I don’t know if the answers would have changed if we had made the trade-offs more salient (e.g., here’s what you’d lose if we took this risk-prioritising policy action). Interested in suggestions for how we could do that better.
It’d be easy for us to run this in other countries. We’ll put the data and code online soon. If someone’s keen to handle the ‘get it into the hands of people who want to use it’ piece, we could do the ‘run the survey and write the technical report’ piece. It’s all in R, so the marginal cost of another country is low. We’d need access to census data to do the statistical adjustment that estimates population-level agreement (but it should be easy to check whether that’s available).
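For readers unfamiliar with the adjustment step: it amounts to reweighting respondents so the sample’s demographic margins match census margins. A minimal sketch of one common approach, raking (iterative proportional fitting), is below. It’s in Python rather than the project’s R, and the sample, age groups, and census proportions are made-up toy values, not the survey’s actual data or method.

```python
# Toy illustration of raking survey weights to census margins.
# All respondents, categories, and target proportions are hypothetical.
from collections import defaultdict

# Toy sample: each respondent has an age group and a gender.
sample = [
    {"age": "18-39", "gender": "f"}, {"age": "18-39", "gender": "m"},
    {"age": "18-39", "gender": "m"}, {"age": "40+", "gender": "f"},
    {"age": "40+", "gender": "m"},
]

# Hypothetical census margins (population proportions per variable).
margins = {
    "age": {"18-39": 0.45, "40+": 0.55},
    "gender": {"f": 0.5, "m": 0.5},
}

def rake(sample, margins, iters=50):
    """Return one weight per respondent so that weighted sample
    margins match the census targets for every variable."""
    w = [1.0] * len(sample)
    for _ in range(iters):
        for var, targets in margins.items():
            # Current weighted total of each category for this variable.
            totals = defaultdict(float)
            for wi, r in zip(w, sample):
                totals[r[var]] += wi
            grand = sum(totals.values())
            # Rescale weights so this variable's margin hits the target;
            # repeating over all variables converges to joint agreement.
            for i, r in enumerate(sample):
                w[i] *= targets[r[var]] / (totals[r[var]] / grand)
    return w

weights = rake(sample, margins)
```

Weighted estimates of agreement with each survey item (weighted means using these weights) then approximate population-level agreement, which is what the census data is needed for.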
Thanks. Hmm. The vibe I’m getting from these answers is P(extinction) > 5%, which is higher than the XPT estimates you linked.
Ohh that’s great. We’re starting to do significant work in India and would be interested in knowing similar things there. Any idea of what it’d cost to run there?
I’ll look into it. The census data part seems okay. Collecting a representative sample would be harder (e.g., literacy rates are lower, so I don’t know how we’d estimate responses for those groups).
That makes sense. We might do some more strategic outreach later this year where a report like this would be relevant, but for now I don’t have a clear use case in mind, so it’s probably better to wait. Approximately how much time would you need to run this?
Our project took approximately two weeks of FTE time for three people (most of it was parallelisable). That’s probably the best reference class.
Very helpful. I’ll keep it in mind if the use case/need emerges in the future.