Polls on De/​Accelerating AI
Recently, people on both ends of the AI de/acceleration spectrum have been claiming that EAs sit at the opposite end. So I think it would be helpful to have a poll to get a better idea of where EAs actually stand. I think it's useful to have concrete descriptions of the different positions, though it's probably not possible to make the ordering work for everyone, so you may need to do some averaging to find your location on the spectrum.
Poll on big picture AI de/​acceleration (1 is on left, 21 is on right)
1. Accelerate ASI everywhere (subsidy, no regulations)
2. Accelerate AGI everywhere (subsidy, no regulations)
3. Accelerate ASI in a less safe lab in the US (subsidy, no regulations)
4. Accelerate AGI in a less safe lab in the US (subsidy, no regulations)
5. Accelerate ASI in a safer lab in the US (subsidy, no regulations)
6. Accelerate AGI in a safer lab in the US (subsidy, no regulations)
7. Neutral (no regulations, no subsidy)
8. Responsible scaling policy or similar
9. SB-1047 (liability, etc.)
10. Pause AI if AI is greatly accelerating AI progress itself (e.g. 10x)
11. Ban training above a certain size
12. Ban a certain level of autonomous code writing
13. Ban AI agents
14. Pause AI if it causes a major disaster (e.g. a Chernobyl-scale accident)
15. Restrict access to AI to a few people (as with nuclear technology)
16. Make AI progress very slow (heavily regulate it)
17. Pause AI if there is mass unemployment (say >20%)
18. Pause AI now if it is done globally
19. Pause AI now unilaterally (one country)
20. Shut AI down for decades until something changes radically, such as genetic enhancement of intelligence
21. Never build AGI (Stop AI)
Poll on personal action AI de/​acceleration (1 is on left, 11 is on right)
1. Ok to be a capabilities employee at a less safe lab (direct impact)
2. Ok to be a capabilities employee at a safer lab (direct impact)
3. Ok to be a capabilities employee at a less safe lab for career capital/donations
4. Ok to be a capabilities employee at a safer lab for career capital/donations
5. Ok to be a safety employee at a less safe lab
6. Ok to be a safety employee at a safer lab
7. Ok to invest in AI companies
8. Ok to pay for AI (but not to invest)
9. Ok to use free AI (but not to pay for it, or you must offset any payment for AI)
10. Not ok to use AI
11. Only donating to/working on pausing AI is ok
Update: Now that voting has closed, I thought I would summarize some of the results. Obviously, this is a tiny subset of EAs, so there could be large sampling bias, and respondents may have averaged across positions, so none of these conclusions are held with much confidence.
The big-picture poll received 39 votes. 13% want AGI never to be built, 26% said to pause AI now in some form, and another 21% would like to pause AI if a particular event/threshold occurs. 31% want some other regulation, 5% are neutral, and 5% want to accelerate AI in a safer US lab. If I had to summarize the median respondent, it would be: strong regulation of AI, or a pause if a particular event/threshold is met. There appears to be more evidence for the claim that EA wants AI to be paused/stopped than for the claim that EA wants AI to be accelerated.
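To make the grouping above concrete, here is a minimal Python sketch of how votes on the 1-21 big-picture scale could be bucketed into those categories and summarized. The vote list and the exact bucket boundaries are illustrative assumptions of mine, not the actual poll data or necessarily the grouping used in the summary.

```python
from statistics import median

# Hypothetical raw votes on the 1-21 big-picture scale (NOT the real poll data).
votes = [21, 20, 19, 18, 18, 17, 16, 15, 14, 13, 12, 11, 10, 9, 9, 8, 8, 7, 6]

# Assumed mapping from scale positions to summary categories; the boundaries
# used in the write-up above may differ.
buckets = {
    "accelerate (1-6)": range(1, 7),
    "neutral (7)": [7],
    "other regulation (8-9, 11-13, 15-16)": [8, 9, 11, 12, 13, 15, 16],
    "pause on event/threshold (10, 14, 17)": [10, 14, 17],
    "pause now (18-20)": range(18, 21),
    "never build AGI (21)": [21],
}

# Share of votes falling in each bucket, plus the median position.
for label, positions in buckets.items():
    share = sum(v in positions for v in votes) / len(votes)
    print(f"{label}: {share:.0%}")

print(f"median position: {median(votes)}")
```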
The personal-action poll received 27 votes. 67-74% thought it was okay to work in an AI lab in some capacity, depending on how you interpret votes that fall between defined positions. About half of these were okay with the work being in capabilities, whereas the other half said it should be in safety. About 26% said it is not okay to invest in AI, about 15% said it is not okay to pay for AI, and about 7% said it is not okay to use AI at all. So the median respondent likely thinks it is okay to do AI safety work in the labs. It appears that EAs find personal actions that accelerate AI more acceptable than big-picture acceleration, but this could be because the personal question was phrased in terms of what is permissible, whereas the big-picture question asked what would be best to do.