What PauseAI wants to ban or “pause” seems weakly defined and not clearly tied to any actual threat level. Their stated goals focus on banning further scaling of the LLM architecture, which has known limitations that make ‘takeover’ scenarios unlikely (limited context windows, no recursive self-updating independent of training, dependence on massive datacentres to run) and known problems (inscrutability and an obvious lack of consistent “alignment”) that remain problems with smaller models if you try to use them for anything sensitive. It’s not clear what “more powerful than GPT-4” actually means. Nor is it clear what level of understanding would justify un-pausing, or how that understanding would be obtained without any models to study.
Banning LLMs above a certain scale might even have the perverse effect of encouraging companies to optimize performance, or to reinvent the idea of learning, in other ways that are riskier. Or of setting back our ability to understand extremely powerful LLMs when someone develops them outside a US/EU legislative framework anyway. Or of preventing positive AI developments that could save thousands of lives (or, from the point of view of a longtermist who believes existential risk is currently nonzero, including from non-AI factors, but might drop to zero in future thanks to friendly AI, perhaps 10^31 lives!).
Beyond that, from the perspective of being an effective giving target, I think PauseAI suffers from the same shortcomings most lobbying outfits do: influencing government and public opinion against the direction of economic growth is hard; it’s unclear what a marginal dollar donated actually achieves; and the other side has far more money and connections to ramp up activity in an equal and opposite direction if it feels its business interests are threatened. So there’s no reason to believe they’re effective even if one agrees their goal is well-defined and correct.
You could also question the motivations of some of the people arguing for AI pauses (hi Elon, we see the LLM you launched shortly after signing the letter saying that LLMs ahead of yours were dangerous and should be banned...), although I don’t think this applies to the PauseAI organization specifically.
>PauseAI suffers from the same shortcomings most lobbying outfits do...
I’m confused about this section: yes, this kind of lobbying is hard, and the impact of a marginal dollar is very unclear. The acc side also has far more resources (probably; we should be wary of this becoming a Bravery Debate).
But this doesn’t feel like a criticism of PauseAI: limited tractability is easily outweighed by a very high potential impact.