Greg_Colbourn
Global moratorium on AGI, now (Twitter). Founder of CEEALAR (née the EA Hotel; ceealar.org)
Thanks. I’m wondering now whether it’s mostly because I’m quoting Shakeel, and there’s been some (mostly unreasonable imo) pushback on his post on X.
Why is this being downvoted!?
“Near Midnight in Suicide City”
Note that the protestors say[1] that they are going to use the “necessity defence” here.
Well worth watching this documentary by award-winning journalist John Sherman.
“to influence the policy of a government by intimidation” might fit, given that they may well end up more powerful than governments if they succeed in their mission to build AGI (and they already have a lot of money, power and influence).
We are fast running out of time to avoid ASI-induced extinction. How long until a model (that is intrinsically unaligned, given no solution yet to alignment) self-exfiltrates and initiates recursive self-improvement? We need a global moratorium on further AGI/ASI development asap. Please do what you can to help with this—talk to people you know, and your representatives. Support groups like PauseAI.
OpenAI’s o1 tried to avoid being shut down, and lied about it, in evals
@jason-1 it would be interesting to hear your take on the OP.
EDIT: not sure why the tag isn’t working; I’m getting Jason’s username from the URL on his profile page.
Note that this isn’t actually a hypothetical situation, and the answer to the question is of practical significance given that anti-AI protestors are currently facing jail for trying to (non-violently) stop OpenAI.
I feel like it should be, under reckless endangerment or similar; perhaps even anti-terror laws, under “acts dangerous to human life”. But what is the threshold for judging an activity to be risky or dangerous to human life? How much general and expert consensus does there need to be? (I am not a lawyer.)
Thanks. Yeah, I see a lot of disagreement votes. I was being too hyperbolic for the EA Forum. But I do put ~80% on it (which I guess translates to “pretty much”?), with the remaining ~20% being longer timelines, or dumb luck of one kind or another that we can’t actually influence.
The first of those has a weird resolution criterion of 30% year-on-year world GDP growth (“transformative” more likely means no humans left, after <1 year, to observe GDP imo; I give the scenario of 30+% growth over a whole year little credence because of this). For the second one, I think you need to include “AI Dystopia” as doom as well (it sounds like an irreversible catastrophe for the vast majority of people), so 27%. (And again re LLMs: x-risk isn’t from LLMs alone. “System 2” architecture and embodiment, two other essential ingredients of AGI, are well on track too.)
1/ Unaligned ASI existing at all is equivalent to “doom-causing levels of CO2 over a doom-causing length of time”. We need an immediate pause on AGI development to prevent unaligned ASI. We don’t need an immediate pause on all industry to prevent doom-causing levels of CO2 over a doom-causing length of time.
2/ It’s really not 99% of worlds. That is way too conservative. Metaculus puts a 25% chance on weak AGI happening within 1 year and a 25% chance on strong AGI happening within 3 years.
1% (again, conservative[1]) is not a Pascal’s Mugging. 1%(+) catastrophic (not extinction) risk is plausible for climate change, and a lot is being done there (arguably, enough that we are on track to avert catastrophe if action[2] keeps scaling).
“flippant militant advocacy for pausing on alarmist slogans that will carry extreme reputation costs in the 99% of worlds where no x-risk from LLMs happen”
It’s anything but flippant[3]. And x-risk isn’t from LLMs alone. “System 2” architecture, and embodiment, two other essential ingredients, are well on track too. I’m happy to bear any reputation costs in the event we live through this. It’s unfortunate, but if there is no extinction, then of course people will say we were wrong. But there might well only be no extinction because of our actions![4]
(Sorry I missed this before.) There is strong public support for a Pause already. Arguably all that’s needed is galvanising a critical mass of the public into taking action.
I think a bottleneck for this is finding experienced/plugged-in people who are willing to go all out on a Pause.
1% is very conservative (and based on broad surveys of AI researchers, who are mostly building the very technology causing the risk, so are obviously biased against it being high). The point I’m making is that even a 1% chance of death by collateral damage is totally unacceptable coming from any other industry. Supporting a Pause should therefore be a no-brainer. (Or, to be consistent, we should be dismantling ~all regulation of ~all industry.)
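As a rough back-of-the-envelope illustration (my own numbers, assuming a world population of roughly 8 billion):

$$0.01 \times 8{,}000{,}000{,}000 \approx 80{,}000{,}000\ \text{expected deaths}$$

No other industry would be allowed to impose expected collateral damage of anything like that magnitude.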
The fact that you can’t say more is part of the problem. There needs to be an open global discussion of an AGI Moratorium at the highest levels of policymaking, government, society and industry.
The alarmist rhetoric is kind of intentional. I hope it’s persuasive to at least some people. I’ve been quite frustrated post-GPT-4 over the lack of urgency in EA/LW over AI x-risk (as well as the continued cooperation with AGI accelerationists such as Anthropic). Actually to the point where I think of myself more as an “AI notkilleveryoneist” than an EA these days.