I’m skeptical about the value of slowing down leading AI labs, primarily because it would likely reduce the influence of EA values on how AGI/ASI is deployed. Anthropic is the clearest example of a lab staffed by people who share these values, but I’d guess that EAs also have more value overlap with the staff at OpenAI and DeepMind than with the actors who would catch up because of a slowdown. And for what it’s worth, these labs were founded with the stated goal of benefiting humanity, before it became far more apparent that current paradigms have a high chance of producing AGI with the potential to grant profit and power to their human operators and investors.
As others have noted, people and powerful groups outside this community and adjacent ones don’t seem interested in consequentialist, impartial, altruistic priorities like creating a positive long-term future for humanity; they are more self-interested. Personally I’m more downside-focused, but I think it’s relevant to most EAs that these other parties would be less willing to dedicate substantial resources to creating large amounts of happiness for others. Because of that, reducing the influence of EA values would mean a considerable loss of expected future value.
EDIT (2024-05-19): When I wrote this I had in mind Anthropic > OpenAI > DeepMind, but Anthropic > DeepMind > OpenAI now seems more sensible. It’s unclear where various governments/militaries/politicians/CEOs would fit into this ranking.
Leopold Aschenbrenner makes some good points in favor of “Government > Private sector” on the latest Dwarkesh podcast.