I think this is a good and useful post in many ways, in particular laying out a partial taxonomy of differing pause proposals and gesturing at their grounding and assumptions. What follows is a mildly heated response I had a few days ago, whose heatedness I don’t necessarily endorse but whose content seems important to me.
Sadly this letter is full of thoughtless remarks about China and the US/West. Scott, you should know better. Words have power. I recently wrote an admonishment to CAIS for something similar.
The biggest disadvantage of pausing for a long time is that it gives bad actors (eg China) a chance to catch up.
There are literal misanthropic ‘effective accelerationists’ in San Francisco, some of whose stated purpose is to train/develop AI which can surpass and replace humanity. There’s Facebook/Meta, whose leaders and executives have been publicly pooh-poohing discussion of AI-related risks as pseudoscience for years, and whose actual motto is ‘move fast and break things’. There’s OpenAI, which with great trumpeting announces its ‘Superalignment’ strategy without apparently pausing to think, ‘But what if we can’t align AGI in 5 years?’. We don’t need to invoke a bogeyman ‘China’ to make this sort of point. Note also that the CCP (along with the EU and UK governments) has so far been more active in AI restraint and regulation than, say, the US government, or orgs like Facebook/Meta.
Suppose the West is right on the verge of creating dangerous AI, and China is two years away. It seems like the right length of pause is 1.9999 years, so that we get the benefit of maximum extra alignment research and social prep time, but the West still beats China.
Now, this was in the context of paraphrases of others’ positions on a pause in AI development, so it’s at least slightly mention-flavoured (as opposed to use). But as far as I can tell, the precise framing here has been introduced in Scott’s retelling.
Whoever introduced this formulation, it is bonkers in at least two ways. First, who is ‘the West’ and who is ‘China’? This hypothetical frames us as hivemind creatures in a two-player strategy game with a single lever. Reality is a lot more porous than that, in ways which matter (strategically and in terms of outcomes). I shouldn’t have to point this out, so this is a little bewildering to read. Let me reiterate: governments are not currently pursuing advanced AI development; only companies are. The companies are somewhat international, mainly headquartered in the US and UK but also to some extent in China and the EU, and the governments have thus far been unwitting passengers with respect to the outcomes. Of course, these things can change.
Second, actually think about the hypothetical where ‘we’[1] are ‘on the verge of creating dangerous AI’. For sufficient ‘dangerous’, the only winning option for humanity is to take the steps we can to prevent, or at least delay[2], that thing coming into being. This includes advocacy, diplomacy, ‘aggressive diplomacy’ and so on. I put forward that the right length of pause then is ‘at least as long as it takes to make the thing not dangerous’. You don’t win by capturing the dubious accolade of nominally belonging to the bloc which directly destroys everything! To be clear, I think Scott and I agree that ‘dangerous AI’ here is shorthand for, ‘AI that could defeat/destroy/disempower all humans in something comparable to an extinction event’. We already have weak AI which is dangerous to lesser levels. Of course, if ‘dangerous’ is more qualified, then we can talk about the tradeoffs of risking destroying everything vs ‘us’ winning a supposed race with ‘them’.
I’m increasingly running with the hypothesis that many anglophones are mind-killed on the inevitability of contemporary great power conflict in a way which I think wasn’t the case even, say, 5 years ago. Maybe this is how thinking people felt in the run-up to WWI, I don’t know.
I wonder if a crux here is some kind of general factor of trustingness toward companies vs toward governments—I think extremising this factor would change the way I talk and think about such matters. I notice that a lot of American libertarians seem to have a warm glow around ‘company/enterprise’ that they don’t have around ‘government/regulation’.
[ In my post about this I outline some other possible cruxes and I’d love to hear takes on these ]
Separately, I’ve gotten increasingly close to the frontier of AI research and AI safety research, and the challenge of ensuring these systems are safe remains very daunting. I think some policy/people-minded discussions are missing this rather crucial observation. If you expect controlling AGI to be easy (and expect others to expect that too), I can see more why people would frame things around power struggles and racing. For this reason, I consider it worthwhile repeating: we don’t know how to ensure these systems will be safe, and there are some good reasons to expect that they won’t be by default.
I repeat that the post as a whole is doing a service and I’m excited to see more contributions to the conversation around pause and differential development and so on.
Who, me? You? No! Some development team at DeepMind or OpenAI, presumably, or one of the current small gaggle of other contenders, or a yet-to-be-founded lab.
If it comes to it, extinction an hour later is better than an hour sooner.