Who else thinks we should be aiming for a global moratorium on AGI research at this point? I’m considering ending every comment I make with “AGI research cessandum est”, or “Furthermore, AGI research must be stopped”.
Strong agreement that a global moratorium would be great.
I’m unsure if aiming for a global moratorium is the best thing to aim for rather than a slowing of the race-like behaviour—maybe a relevant similar case is whether to aim directly for the abolition of factory farms or just incremental improvements in welfare standards.
This post from last year, “What an actually pessimistic containment strategy looks like”, has some good discussion on the topic of slowing down AGI research.
Loudly and publicly calling for a global moratorium should have the effect of slowing down race-like behaviour, even if it is ultimately unsuccessful. We can at least buy some more time, it’s not all or nothing in that sense. And more time can be used to buy yet more time, etc.
Factory farming is an interesting analogy, but the trade-off is different. You can think about whether abolitionism or welfarism has higher EV over the long term, but the stakes aren’t literally the end of the world if factory farming continues for 5-15 more years (i.e. humanity won’t end up in factory farms).
The linked post is great, thanks for the reminder of it (and good to see it so high up the All Time top LW posts now). Who wants to start the institution lc talks about at the end? Who wants to devote significant resources to working on convincing AGI capabilities researchers to stop?
Isn’t it possible that calling for a complete stop to AI development actually counterfactually speeds up AI development?
The scenario I’m thinking of is something like:
There’s a growing anti-AI movement calling for a complete stop
A lot of people in that movement are ignorant about AI, and about the nature of AI risks
It’s therefore easy for pro-AI people to dismiss these concerns, because the reasons given for the stop are in fact wrong/bad
Any other, well-grounded calls for AI slowdown aren’t given the time of day, because they are assumed to be the same as the others
Rather than thoughtful debate, the discourse turns into just attacking the other group
I’m not sure how exactly you’re proposing to advocate for a complete stop, but my worry would be that coming across as alarmist and not being able to give compelling specific reasons that AI poses a serious existential threat would poison the well.
I think it’s great that you’re trying to act seriously on your beliefs, Greg, but I am worried about a dynamic like this.
Well, I’ve articulated what I think are compelling, specific reasons that AI poses a serious existential threat in my new post: AGI rising: why we are in a new era of acute risk and increasing public awareness, and what to do now. Hope this can positively impact the public discourse toward informed debate. (And action!)
Yes, thank you for that! I’m probably going to write an object level comment there.
[Edit: I tweeted this]
Really great to see the FLI open letter with some big names attached, so soon after posting the above. Great to see some sense prevailing on this issue at a high level. This is a big step in pushing the global conversation on AGI forward toward a much needed moratorium. I’m much more hopeful than I was yesterday! But there is still a lot of work to be done in getting it to actually happen.
GPT-4 is advanced enough that it will be used to meaningfully speed up the development of GPT-5. If GPT-5 can make GPT-6 on its own, it’s game over.
I don’t see how we could implement a moratorium on AGI research that does stop capabilities research but doesn’t stop alignment research?
Cap the model size and sophistication somewhere near where it is now? Seems like there’s easily a decade worth of alignment research that could be done on current models (and other theoretical work), which should be done before capabilities are advanced further. A moratorium would help bridge that gap. Demis Hassabis has talked about hitting the pause button as we get closer to the “grey zone”. Now is the time!
A variant on your proposal could be a moratorium on training new large models (e.g. OpenAI would be forbidden from training GPT-5, for example).
That would be more enforceable, because you need lots of compute to train a new model. I don’t know how we would stop an academic thinking up new ideas on how to structure AI models better, and even if we could, it would be hard to disentangle this from alignment research.
It would probably achieve most of what you want. For someone who’s worried about short timelines, reducing the scope for the scaling hypothesis to apply is probably pretty powerful, at least in the short term.
Interesting, yes, such a moratorium on training new LLMs could help. But we also need to make the research morally unacceptable too—I think stigmatisation of AGI capabilities research could go a long way. Almost no one is working on human genetic enhancement or cloning, mainly because of the taboos around them. It’s not like there is a lot of underground research there. (I’m thinking this is needed, because any limits on compute that are imposed could easily be got around.)
A limit on compute designed to constrain OpenAI, Anthropic, or Google from training a new model sounds like a very high bar. I don’t understand why that could easily be got around?
Spoofing multiple accounts and pooling their compute (as in the Clippy story linked, but I’m imagining humans doing it). The kind of bending of the rules that happens when something is merely regulated but not taboo. It’s not just Microsoft and Google we need to worry about. If the techniques and code are out there (open-source models are not far behind cutting-edge research), many actors will be trying to run them at scale.
These will still be massive, and massively expensive, training runs though—big operations that will constitute very big strategic decisions only available to the best-resourced actors.
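A rough calculation illustrates why a frontier training run is hard to hide. A minimal sketch, using the common approximation that training compute ≈ 6 × parameters × tokens; the parameter count, token count, GPU throughput, utilisation, and price per GPU-hour below are all illustrative assumptions, not figures from any actual lab or model:

```python
# Back-of-envelope estimate of compute and cost for a hypothetical
# frontier-scale training run, using the common approximation
# FLOPs ~= 6 * N * D (N = parameters, D = training tokens).
# All numbers below are assumptions for illustration only.

N = 1e12   # assumed: 1 trillion parameters
D = 1e13   # assumed: 10 trillion training tokens
total_flops = 6 * N * D  # ~6e25 FLOPs

# Assume an A100-class GPU sustaining ~3e14 FLOP/s at ~40% utilisation.
effective_flops_per_gpu_s = 3e14 * 0.4
gpu_hours = total_flops / effective_flops_per_gpu_s / 3600

cost_per_gpu_hour = 2.0  # assumed cloud price, USD
total_cost = gpu_hours * cost_per_gpu_hour

print(f"{total_flops:.1e} FLOPs")
print(f"{gpu_hours:.2e} GPU-hours")
print(f"~${total_cost:,.0f}")
```

Under these assumptions the run costs on the order of hundreds of millions of dollars and ties up tens of thousands of GPUs for months, which is the kind of footprint a compute-monitoring regime could plausibly detect.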
In the post-AutoGPT world, this seems like it will no longer be the case. There is enough fervour among AGI accelerationists that the required resources could be quickly amassed by crowdfunding (cf. crypto projects raising similar amounts to those needed).
Yes, but training runs will become increasingly cheap. A taboo is far stronger than regulation.