Marcus says:
But a pause gets no additional benefit whereas most other regulation gets additional benefit (like model registry, chip registry, mandatory red teaming, dangerous model capability evals, model weights security standards, etc.)
Matrice says:
Due to this, many in PauseAI are trying to do coalition politics bringing together all opponents of work on AI (neo-Luddites, SJ-oriented AI ethicists, environmentalists, intellectual property lobbyists).
These seem to be hinting at an important crux. On the one hand, I can see that cooperating with people who have other concerns about AI could water down the content of one’s advocacy.
On the other hand, might it be easier to get a broader coalition behind a pause, or some other form of regulation that many others in an AI-concerned coalition would view as a win? At least at a cursory level, many of the alternatives Marcus mentioned sound like things that wouldn’t interest other members of a broader coalition, only people focused on x-risk.
Whether x-risk focused advocates alone can achieve enough policy wins against the power of Big AI (and corporations interested in harnessing it) is unclear to me. If other members of the AI-concerned coalition have significantly more influence than the x-risk group—such that a coalition-based strategy would excessively “risk focusing on policies and AI systems that have little to do with existential risk”—then it is unclear to me whether the x-risk group had enough influence to go it alone either. In that case, would they have been better off with the coalition even if most of the coalition’s work only generically slowed down AI rather than bringing specific x-risk reductions?
My understanding is that most successful political/social movements employ a fairly wide range of strategies—from elite lobbying to grassroots work, from narrow focus on the movement’s core objectives to building coalitions with those who may have common opponents or somewhat associated concerns. Ultimately, elites care about staying in power, and most countries important to AI do have elections. AI advocates are not wrong that imposing a bunch of regulations of any sort will slow down AI, make it harder for AI to save someone like me from cancer 25-35 years down the road, and otherwise impose some real costs. There has to be enough popular support for paying those costs.
So my starting point would be an “all of the above” strategy, rather than giving up on coalition building without first making a concerted effort. Maybe PauseAI the org, or pause advocacy the idea, isn’t the best way to go about coalition building or building broad-based public support. But I’m not seeing much public discussion of better ways?
PauseAI largely seeks to emulate existing social movements (like the climate justice movement) but essentially has a cargo-cult approach to how social movements work. For a start, there is currently no scientific consensus around AI safety the way there is around climate change, so all actions trying to imitate the climate justice movement are extremely premature. Blockading an AI company’s office talking about existential risk from artificial general intelligence won’t convince any bystander; it will just make you look like a doomsayer caricature. It would be comparable to staging an Extinction Rebellion protest in the mid-19th century.
Due to this, many in PauseAI are trying to do coalition politics bringing together all opponents of work on AI (neo-Luddites, SJ-oriented AI ethicists, environmentalists, intellectual property lobbyists). But the space of possible AI policies is highly dimensional, so any such coalition, done with little understanding of political strategy, will risk focusing on policies and AI systems that have little to do with existential risk (such as image generators), or that even might prove entirely counter-productive (by entrenching further centralization in the hands of the Big Four¹ and discouraging independent research by EA-aligned groups like EleutherAI).
¹: Microsoft/OpenAI, Amazon/Anthropic, Google/DeepMind, Facebook/Meta
Hi Matrice! I find this comment interesting. Considering the public are in favour of slowing down AI, what evidence points you to the below conclusion?
“Blockading an AI company’s office talking about existential risk from artificial general intelligence won’t convince any bystander; it will just make you look like a doomsayer caricature.”
Also, what evidence do you have for the below comment? For example, I met the leader of the voice actors association in Australia and we agreed on many topics, including the need for an AISI. In fact, I’d argue you’ve got something important wrong here—talking about existential risk instead of catastrophic risks to policymakers can be counterproductive because there aren’t many useful policies to prevent it (besides pausing).
“ the space of possible AI policies is highly dimensional, so any such coalition, done with little understanding of political strategy, will risk focusing on policies and AI systems that have little to do with existential risk”
“slowing down AI” != “slowing down AI because of x risk”
In addition to what @gw said on the public being in favor of slowing down AI, I’m mostly basing this on reactions to news about PauseAI protests on generic social media websites. The idea that LLM scaling without further technological breakthroughs will for sure lead to superintelligence in the coming decade is controversial by EA standards, fringe by general AI community standards, and roundly mocked by the general public.
If other stakeholders agree with the existential risk perspective then that is of course great and should be encouraged. To develop further on what I meant (though see also the linked post), I am extremely skeptical that allying with copyright lobbyists is good by any EA/longtermist metric, when ~nobody thinks art generators pose any existential risk and big AI companies are already negotiating deals with copyright giants (or even the latter creating their own AI divisions, as with Adobe Firefly or Disney’s new AI division), while independent EA-aligned research groups like EleutherAI are heavily dependent on the existence of open-source datasets.
There is enough of a scientific consensus that extinction risk from AGI is real and significant. Timelines are arguably much shorter in the case of AGI than climate change, so the movement needs to be ramped up in months-years, not years-decades.
I’d say more like late-20th Century (late 1980s?) in terms of scientific consensus, and mid-21st century (2040s?) in terms of how close global catastrophe is.
Re the broad coalition—the focus is on pausing AI, which will help all anti-AI causes.
Most surveys of AI/ML researchers (with significant selection effects and very high variance) indicate p(doom)s of ~10% (spread among a variety of different kinds of global risks beyond the traditional AI-go-foom), and (like Ajeya Cotra’s report on AI timelines) a predicted AGI date in mid-century by one definition, in the next century by another.
Pausing scaling LLMs above a given magnitude will do ~nothing for non-x-risk AI worries. Pausing any subcategory below that (e.g. AI art generators, open-source AI) will do ~nothing (and indeed probably be a net negative) for x-risk AI worries.
10% chance of a 10%[1] chance of extinction happening within 5 years[2] is more than enough to be shutting it all down immediately[3]. It’s actually kind of absurd how tolerant of death risk people are on this relative to those from the pharmaceutical, nuclear or aviation industries.
I outline here why 10% should be used rather than 50%.
Eyeballing the graph here, it looks like at least 10% by 2030.
I think it’s more like 90% [p(doom|AGI)] chance of a 50% chance [p(AGI in 5 years)].
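The combined numbers in this exchange are just products of the nested estimates. A quick sketch (the 10%/10% and 90%/50% figures come from the two comments above; everything else is illustrative):

```python
# Unconditional risk over the period = p(AGI arrives) * p(doom | AGI).
# The nested "X% chance of a Y% chance" phrasing multiplies out directly.

def combined_risk(p_agi: float, p_doom_given_agi: float) -> float:
    """Multiply the probability AGI arrives by the conditional doom probability."""
    return p_agi * p_doom_given_agi

# Parent comment: 10% chance of a 10% chance within 5 years -> roughly 1%.
low_estimate = combined_risk(0.10, 0.10)

# Reply: 50% chance of AGI in 5 years, 90% doom given AGI -> roughly 45%.
high_estimate = combined_risk(0.50, 0.90)

print(low_estimate, high_estimate)
```

Either way the point stands: even the low-end product is a ~1% unconditional extinction risk within five years, far above what is tolerated from pharma, nuclear, or aviation.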
You don’t have to go as far back as the mid-19th-century to find a time before scientific consensus about global warming. You only need to go back to 1990 or so.
Yes, I was thinking of James Hansen’s testimony to the US Senate in 1988 as being equivalent to some of the Senate hearings on AI last year.
Pausing AI development is not a good policy to strive for. Nearly all regulations will slow down AI progress. That’s what regulation does by default. It makes you slow down by having to do other stuff instead of just going forward. But a pause gets no additional benefit whereas most other regulation gets additional benefit (like model registry, chip registry, mandatory red teaming, dangerous model capability evals, model weights security standards, etc.) I don’t know what the ideal policies are but it doesn’t seem like a “pause” with no other asks is the best one.
Pausing AI development for any meaningful amount of time is incredibly unlikely to occur. They will claim they are shifting the Overton window, but frankly, they mainly seem to do a bunch of protesting where they do stuff like call Sam Altman and Dario Amodei evil.
Pause AI, the organization, does, frankly, juvenile stunts that make EA/AI safety advocates look less serious. Screaming that people are evil is extremely unnuanced, juvenile, and very unlikely to build the necessary bridges to really accomplish things. It makes us look like idiots. I think EAs too often prefer to do research from their laptops as opposed to getting out into the real world and doing things; but doing things doesn’t just mean protesting. It means crafting legislation like SB 1047. It means increasing the supply of mech interp researchers by training them. It means lobbying for safety standards on AI models.
Pause AI’s premise is very “doomy” and only makes sense if you have extremely high AI extinction probabilities and the only way to prevent extinction is an indefinite pause to AI progress. Most people (including those inside of EA) have far less confidence in how any particular AI path will play out and are far less confident in what will/won’t work and what good policies are. The Pause AI movement is very “soldier” mindset and not “scout” mindset.
This is assuming that the alignment/control problems are (a) solvable, and (b) solvable in time. I’m sceptical of (a), let alone (b).
None of the regulations you mention (“model registry, chip registry, mandatory red teaming, dangerous model capability evals, model weights security standards, etc.”) matter without at least a conditional Pause when red lines are crossed (and arguably we’ve already crossed many previously stated red lines, with no consequences in terms of slowing down or pausing).
This and the following point are addressed by other commenters.
See above.
A lot of us have done our scouting (and continue to do so). The time for action is now (or never). Also, I don’t think your p(doom) has to be super high to conclude that the best course of action is pausing.
Hi Marcus, I’m in the mood for a bit of debate, so I’m going to take a stab at responding to all four of your points :)
LMK what you think!
1. This is an argument against a pause policy, not the Pause org or a Pause movement. I think discerning funders need to see the differences, especially if you’re thinking on the margin.
2. “Pausing AI development for any meaningful amount of time is incredibly unlikely to occur.” < I think anything other than AGI in less than 10 years is unlikely to occur, but that isn’t a good argument not to work on safety. Scale and neglectedness matter, as well as tractability!
”they mainly seem to do a bunch of protesting where they do stuff like call Sam Altman and Dario Amodei evil.”
- Can you show evidence of this please?
3. “Pause AI, the organization, does, frankly, juvenile stunts that make EA/AI safety advocates look less serious.”
- Samesies—can you provide evidence please?
In fact, this whole point seems pretty unjustified. It seems you’re basically arguing that advocacy doesn’t work? Is that correct?
4. “Pause AI’s premise … only makes sense if you have extremely high AI extinction probabilities”
Can you justify this point please? I think it is interesting but it isn’t really explained.
I don’t think there is a need for me to show the relationship here.
2⁄3. https://youtu.be/T-2IM9P6tOs?si=uDiJXEqq8UJ63Hy2 — this video came up as the first search result when I searched “pause ai protest” on YouTube. In it, they chant things like “OpenAI sucks! Anthropic sucks! Mistral sucks!” and “Demis Hassabis, reckless! Dario Amodei, reckless!”
I agree that working on safety is a key moral priority. But working on safety looks a lot more like the things I linked to in #3. That’s what doing work looks like.
This seems to be what a typical protest looks like. I’ve seen videos of others. I consider these to be juvenile and unserious, and unlikely to build the necessary bridges to accomplish outcomes. I’ll let others form their opinions.
The provided source doesn’t show PauseAI affiliated people calling Sam Altman and Dario Amodei evil.
Correct, I potentially misremembered. The actual things they definitely say, at least in this video, are “OpenAI sucks! Anthropic sucks! Mistral sucks!” and “Demis Hassabis, reckless! Dario Amodei, reckless!”
I would submit that I am at the very least directionally correct.
“Demis Hassabis, reckless!” honestly feels to me like a pretty tame protest chant. I did a Google search for “protest” and this was the first result. Signs are things like “one year of genocide funded by UT” which seems both substantially more extreme and less epistemically valid than calling Demis “reckless.”
My sense from your other points is that you just don’t actually want Pause AI to accomplish their goals, so it’s kind of over-determined for you, but if I wanted to tell a story about how a grassroots movement successfully got an international pause on AI, various people chanting that the current AI development process is reckless seems pretty fine to me?
Actually, I’m uncertain if pausing AI is a good idea and I wish the Pause AI people had a bit more uncertainty (on both their “p(doom)” and on whether pausing AI is a good policy) as well. I look at people who have 90%+ p(doom) as, at the very least, uncalibrated, the same way I look at the people who are dead certain that AI is going to go positively brilliant and that we should be racing ahead as fast as possible. It’s as if both of them aren’t doing any/enough reading of history. In the case of my tribe
I would submit that this kind of protesting, including/especially the example you posted makes your cause seem dumb/unnuanced/ridiculous to the onlookers who are indifferent/know little.
Last, I was just responding to the prompt “What are some criticisms of PauseAI?”. It’s not exactly the place for a “fair and balanced view” but also, I think it is far more important to critique your own side than the opposite side since you speak the same language as your own team so they will actually listen to you.
What is a reasonable p(doom|ASI) to have to not be concluding that pausing AI is a good idea? Or—what % chance of death are you personally willing to accept for a shot at immortality/utopia? Would it be the same if it was framed in terms of a game of Russian Roulette?
Strong +1 on #3
I can try to answer 3 for Marcus. Imagine that AI policy is a soccer game for professional soccer players. You’ve put in a lot of practice, know the rules, and know how to work well with your teammates. You’re scoring some goals.
Then someone from a pick-up game league who is just learning to play soccer comes along and tries to be on the team—or, in this case, isn’t even aware of the team. If we let them on the team, not only do we look bad to the other team, but since policy is a team sport, they drive our overall impact down: they’re dead weight we now have to guard against when they do things they think are helpful but are not, depleting energy and resources better spent on scoring goals.
I think in terms of this analogy, there are no midfielders, let alone strikers, on the pitch amongst the professionals. No one is even really trying to score goals. Maybe they are going for corners at best. Many are even colluding with the other team and their supporters to make money throwing the match.
That’s just completely false. Sorry I can’t say more.
I agree with many of the things other people have already mentioned. However, I want to add one additional argument against PauseAI, which I believe is quite important and worth emphasizing clearly:
In general, hastening technological progress tends to be a good thing. For example, if a cure for cancer were to arrive in 5 years instead of 15 years, that would be very good. The earlier arrival of the cure would save many lives and prevent a lot of suffering for people who would otherwise endure unnecessary pain or death during those additional 10 years. The difference in timing matters because every year of delay means avoidable harm continues to occur.
I believe this same principle applies to AI, as I expect its main effects will likely be overwhelmingly positive. AI seems likely to accelerate economic growth, accelerate technological progress, and significantly improve health and well-being for billions of people. These outcomes are all very desirable, and I would strongly prefer for them to arrive sooner rather than later. Delaying these benefits unnecessarily means forgoing better lives, better health, and better opportunities for many people in the interim.
Of course, there are exceptions to this principle, as it’s not always the case that hastening technology is beneficial. Sometimes it is indeed wiser to delay the deployment of a new technology if the delay would substantially increase its safety or reduce risks. I’m not dogmatic about hastening technology and I recognize there are legitimate trade-offs here. However, in the case of AI, I am simply not convinced that delaying its development and deployment is justified on current margins.
To make this concrete, let’s say that delaying AI development by 5 years would reduce existential risk by only 0.001 percentage points. I would not support such a trade-off. From the perspective of any moral framework that incorporates even a slight discounting of future consumption and well-being, such a delay would be highly undesirable. There are pragmatic reasons to include time discounting in a moral framework: the future is inherently uncertain, and the farther out we try to forecast, the less predictable and reliable our expectations about the future become. If we can bring about something very good sooner, without significant costs, we should almost always do so rather than being indifferent to when it happens.
However, if the situation were different—if delaying AI by 5 years reduced existential risk by something like 10 percentage points—then I think the case for PauseAI would be much stronger. In such a scenario, I would seriously consider supporting PauseAI and might even advocate for it loudly. That said, I find this kind of large reduction in existential risk from a delay in AI development to be implausible, partly for the reasons others in this thread have already outlined.
This argument is highly dependent on your population ethics. From a longtermist, total positive utilitarian perspective, existential risk is many, many magnitudes worse than delaying progress, as it affects many, many magnitudes more (potential) people.
I think it would require an unreasonably radical interpretation of longtermism to believe, for example, that delaying something as valuable as a cure for cancer by 10 years (or another comparably significant breakthrough) would be justified, let alone overwhelmingly outweighed, because of an extremely slight and speculative anticipated positive impact on existential risk. Similarly, I think the same is true about AI, if indeed pausing the technology would only have a very slight impact on existential risk in expectation.
I’ve already provided a pragmatic argument for incorporating at least a slight amount of time discounting into one’s moral framework, but I want to reemphasize and elaborate on this point for clarity. Even if you are firmly committed to the idea that we should have no pure rate of time preference—meaning you believe future lives and welfare matter just as much as present ones—you should still account for the fact that the future is inherently uncertain. Our ability to predict the future diminishes significantly the farther we look ahead. This uncertainty should generally lead us to favor not delaying the realization of clearly good outcomes unless there is a strong and concrete justification for why the delay would yield substantial benefits.
Longtermism, as I understand it, is simply the idea that the distant future matters a great deal and should be factored into our decision-making. Longtermism does not—and should not—imply that we should essentially ignore enormous, tangible and clear short-term harms just because we anticipate extremely slight and highly speculative long-term gains that might result from a particular course of action.
I recognize that someone who adheres to an extremely strong and rigid version of longtermism might disagree with the position I’m articulating here. Such a person might argue that even a very small and speculative reduction in existential risk justifies delaying massive and clear near-term benefits. However, I generally believe that people should not adopt this kind of extreme strong longtermism. It leads to moral conclusions that are unreasonably detached from the realities of suffering and flourishing in the present and near future, and I think this approach undermines the pragmatic and balanced principles that arguably drew many of us to longtermism in the first place.
I don’t care about population ethics, so don’t take this as a good-faith argument. But doesn’t astronomical waste imply that saving lives earlier can compete on the same order of magnitude as x-risk?
https://nickbostrom.com/papers/astronomical-waste/
I’m curious how many EAs believe this claim literally, and think a 10 million year pause (assuming it’s feasible in the first place) would be justified if it reduced existential risk by a single percentage point. Given the disagree votes to my other comments, it seems a fair number might in fact agree to the literal claim here.
Given my disagreement that we should take these numbers literally, I think it might be worth writing a post about why we should have a pragmatic non-zero discount rate, even from a purely longtermist perspective.
I think fixed discount rates (i.e. a fixed discount rate per year) of any amount seems pretty obviously crazy to me as a model of the future. We use discount rates as a proxy for things like “predictability of the future” and “constraining our plans towards worlds we can influence”, which often makes sense, but I think even very simple thought-experiments produce obviously insane conclusions if you use practically any non-zero fixed discount rate.
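As a hypothetical illustration of that thought experiment (the 1% rate and the horizons are chosen only for illustration, not taken from the comment): under a fixed annual discount rate r, value t years out is weighted by (1 − r)^t, and even r = 1% drives that weight to roughly 10⁻⁴⁴ at ten thousand years, so astronomically large futures are worth essentially nothing in present-value terms:

```python
# Illustrative sketch: how a fixed annual discount rate weights the far future.
# With rate r, value t years away gets weight (1 - r)**t, which decays
# exponentially regardless of how small r is.

def discount_factor(r: float, years: float) -> float:
    """Present-value weight of benefits arriving `years` from now at fixed rate r."""
    return (1.0 - r) ** years

# Even a "modest" 1% annual rate annihilates value on millennial timescales:
for t in (100, 1_000, 10_000):
    print(f"{t:>6} years out -> weight {discount_factor(0.01, t):.3e}")
```

At 100 years the weight is still ~0.37, but at 10,000 years it is ~2e-44, i.e. a fixed rate implies that saving 10⁴⁴ lives then is worth less than saving one life now, which is the kind of "obviously insane conclusion" the comment refers to.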
See also my comment here: https://forum.effectivealtruism.org/posts/PArvxhBaZJrGAuhZp/report-on-the-desirability-of-science-given-new-biotech?commentId=rsqwSR6h5XPY8EPiT
This is the crux. I think it would reduce existential risk by at least 10% (probably a lot more). And 5 years would just be a start—obviously any Pause should (and in practice will) only be lifted conditionally. I take it your AGI timelines are relatively short? And I don’t think your reasons for expecting the default outcome from AGI to be good are sound (as you even allude to yourself).
I do in fact believe that delaying AI by 5 years reduces existential risk by something like 10 percentage points.
Probably this thread isn’t the best place to hash it out, however.
I think this is a reasonable point of disagreement. Though, as you allude to, it is separate from the point I was making.
I do think it is generally very important to distinguish between:
Advocacy for a policy because you think it would have a tiny impact on x-risk, which thereby outweighs all the other side effects of the policy, including potentially massive near-term effects, because reducing x-risk simply outweighs every other ethical priority by many orders of magnitude.
Advocacy for a policy because you think it would have a moderate or large effect on x-risk, and is therefore worth doing because reducing x-risk is an important ethical priority (even if it isn’t, say, one million times more important than every other ethical priority combined).
I’m happy to debate (2) on empirical grounds, and debate (1) on ethical grounds. I think the ethical philosophy behind (1) is quite dubious and resembles the type of logic that is vulnerable to Pascal-mugging. The ethical philosophy behind (2) seems sound, but the empirical basis is often uncertain.
I wrote some criticism in this comment. Mainly, I argue that
(1) A pause could be undesirable. A pause could be net-negative in expectation (with high variance depending on implementation specifics), and that PauseAI should take this concern more seriously.
(2) Fighting doesn’t necessarily bring you closer to winning. PauseAI’s approach *could* be counterproductive even for the aim of achieving a pause, whether or not it’s desirable. From my comment:
What is the ultimate counterfactual here? I’d argue it’s extinction from AGI/ASI in the next 5-10 years with high probability. Better to fight this and lose than just roll over and die.
To be clear—I’m open to more scouting being done concurrently (and open to changing my mind), but imo none of these answers are convincing or reassuring.
This is missing the point of my 2nd argument. It sure sounds better to “fight and lose than roll over and die.”
But I’m saying that “fighting” in the way that PauseAI is “fighting” could make it more likely that you lose.
Not saying “fighting” in general will have this effect. Or that this won’t ever change. Or that I’m confident about this. Just saying: take criticism seriously, acknowledge the uncertainty, don’t rush into action just because you want to do something.
Unrelated to my argument: Not sure what you mean by “high probability”, but I’d take a combination of these views as a reasonable prior: XPT.
Who else is pushing for a global Pause/Stop/Moratorium/Non-Proliferation Treaty? Who else is doing that in a way such that PauseAI might be counterfactually harming their efforts? Again, no action on this, or waiting for others to do something “better”, are terrible choices when the consequences of insufficient global action are that we all die in the relatively near future.
Do you think it’s possible for you to be convinced that building ASI is a suicide race, short of an actual AI-mediated global catastrophe? What would it take?
~50%. I think XPT is a terrible prior. Much better to look at the most recent AI Impacts Survey, or the CAIS Statement on AI Risk.
What PauseAI wants to ban or “pause” seems fairly weakly defined and not necessarily relevant to any actual threat level. Their stated goals focus on banning scaling of LLM architecture with known limitations that make ‘takeover’ scenarios unlikely (limited context windows, lack of recursive self-updating independently from training, dependence on massive datacentres to run) and known problems (inscrutability and obvious lack of consistent “alignment”) that are still problems with smaller models if you try to use them for anything sensitive. It’s not clear what “more powerful than GPT4” actually means. Nor is it clear what the level of understanding that will result in un-pausing is or how it will be obtained without any models to study.
Banning LLMs of a certain scale might even have the perverse effect of encouraging companies to optimize performance or reinvent the idea of learning in other ways which are more risky. Or setting back ability to understand extremely powerful LLMs when someone develops them outside a US/EU legislative framework anyway. Or preventing positive AI developments that could save thousands of lives (or from the point of view of a longtermist that believes existential risk is currently nonzero including non-AI factors but might drop to zero in future because of friendly AI, perhaps 10^31 lives!)
Beyond that, I think from the perspective of being an effective-giving target, PauseAI suffers from the same shortcomings most lobbying outfits do (influencing government and public opinion in a direction opposed to economic growth is hard, it’s unclear what results a marginal dollar donation achieves, and the other side has a lot more dollars and connections to ramp up activity in an equal and opposite direction if they feel their business interests are threatened), so there’s no reason to believe they’re effective even if one agrees their goal is well-defined and correct.
You could also question the motivations of some of the people arguing for AI pauses (hi Elon, we see the LLM you launched shortly after signing the letter saying that LLMs that were ahead of yours were dangerous and should be banned...) although I don’t think this applies to the PauseAI organization specifically.
>PauseAI suffers from the same shortcomings most lobbying outfits do...
I’m confused about this section: yes, this kind of lobbying is hard, and the impact of a marginal dollar is very unclear. The acc-side also has far more resources (probably; we should be wary of this becoming a Bravery Debate).
This doesn’t feel like a criticism of PauseAI. Limited tractability is easily outweighed by a very high potential impact.
They don’t have any experience and no people with experience driving the ship, where experience and relationships in DC are extremely important. They are meeting with offices, yes, but it’s not clear that they are meeting with the right offices or the right staffers. It’s likely that they are actually not cost-effective because the money could probably be better spent on two highly competent and experienced/plugged in people rather than a bunch of junior people in terms of ROI.
Hi! Interesting comment. To what extent does this also describe most charities spinning out of Ambitious Impacts incubation program?
I’m not familiar with that program, sorry.
Ah, formerly CE. No, I think that formerly CE is not well suited for US Policy-focused spinouts. There aren’t any people on staff that can advise on that well (I’ve been involved in a couple of policy consultation projects for that and it seemed that the advisors just had no grasp regarding what was going on in US policy/advocacy). I think their classic charities are good though!
Another org in the same space, comprised of highly competent and experienced/plugged in people would certainly be welcome, and plausibly could be more effective.
I understand that this topic gets people excited, but commenters are confusing a Pause policy with a Pause movement with the organisation called PauseAI.
Commenters are also confusing ‘should we give PauseAI more money?’ with ‘would it be good if we paused frontier models tomorrow?’
I’ve never seen a topic in EA get a subsection of the community so out of sorts. It makes me extremely suspicious.
I think it is a reasonable assumption that we only should give PauseAI more money (necessary conditions) if (1) we thought that pausing AI is desirable and (2) PauseAI methods are relatively likely to achieve that outcome, conditioned on having the resources to do so. I would argue that many of the comments highlight that both those assumptions are not clear for many of the forum participants. In fact I think it is reasonable to stress disagreement with (2) in particular.
I strongly agree. Almost all of the criticism in this thread seem to start from assumptions about AI that are very far from those held by PauseAI. This thread really needs to be split up to factor that out.
As an example: If you don’t think shrimp can suffer, then that’s a strong argument against the Shrimp Welfare Project. However, that criticism doesn’t belong in the same thread as a discussion about whether the organization is effective, because the two subjects are so different.
Pause AI seems to not be very good at what they are trying to do. For example, this abysmal press release which makes pause AI sound like tinfoil wearing nutjobs, which I already complained about it in the comments here.
I think they’ve been coasting for a while on the novelty of what they’re doing, which helps obscure that only like a dozen or so people are actually showing up to these protests, making them an empty threat. This is unlikely to change as long as the focus of these protests are based on the highly speculative threat of AI x-risk, which people do not viscerally feel as a threat and does not carry authoritative scientific backing compared to something like climate change. People might say they’re concerned about AI on surveys, but they aren’t going to actually hit the streets unless they think it’s meaningfully and imminently going to harm them.
In today’s climate, the only way to build a respectably sized protest movement is to put x-risk on the back burner and focus on attacking AI more broadly: there are a lot of people who are pissed at gen-AI in general, like people mad at data plagiarism, job loss, and enshittification. They are making some steps towards this, but I think there’s a feeling that doing so would align them politically with the left and make enemies among AI companies. They should either embrace this or give up on protesting entirely.
Press release is from Stop AI, which I think is a separate outfit?
It looks like they have one person in common: StopAI team ∩ PauseAI team is Guido Reichstadter. But he’s listed on the former as “protestor” and on the latter as “volunteer”, and I think “separate outfit” is right.