Creating superintelligent artificial agents without a worldwide referendum is ethically unjustifiable. Until a consensus is reached on whether to bring into existence such technology, a global moratorium is required (n.b. we already have AGI).
What is your AI Capabilities Red Line Personal Statement? It should read something like “when AI can do X in Y way, then I think we should be extremely worried / advocate for a Pause*”.
I think it would be valuable if people started doing this; we can’t feel when we’re on an exponential, so it’s likely that powerful AI will creep up on us.
@Greg_Colbourn just posted this and I have an intuition that people are going to read it and say “while it can do Y it still can’t do X”.
*in the case you think a Pause is ever optimal.
This seems bad, but I’m not technical and therefore feel the need for other people to validate or invalidate this feeling of badness.
But maybe it is wrong that I feel the need for this validation, and ignoring the obvious warning signs while waiting for The Adult In The Room to tell me everything is ok, at scale, is the thing that kills us.
Agree. If you think career switches take 18 months but timelines are 72 months (i.e. a switch eats a quarter of the remaining time), then direct work is more important?
Possibly a high-effort, low-reward suggestion for the forum team, but I’d love to be able (with a single click) to listen to forum posts as a podcast via Google’s NotebookLM. I think this could roughly double my consumption of long-form posts.
Go Greg!! Strong upvote because I think more Pause advocacy on the margin is probably the most neglected Safety intervention; disagree-react (the X thingy) because I don’t think it’s “the only way we’re going to survive”, partly because dumb luck exists and we actually need more of everything (waves hand over all possible Governance and Technical safety interventions).
I understand that this topic gets people excited, but commenters are confusing a Pause policy with a Pause movement with the organisation called PauseAI.
Commenters are also confusing ‘should we give PauseAI more money?’ with ‘would it be good if we paused frontier models tomorrow?’
I’ve never seen a topic in EA get a subsection of the community so out of sorts. It makes me extremely suspicious.
Hi Marcus, I’m in the mood for a bit of debate, so I’m going to take a stab at responding to all four of your points :)
LMK what you think!
1. This is an argument against a pause policy, not against the Pause org or a Pause movement. I think discerning funders need to see the differences, especially if they’re thinking on the margin.
2. “Pausing AI development for any meaningful amount of time is incredibly unlikely to occur.” < I think anything other than AGI in less than 10 years is unlikely to occur, but that isn’t a good argument not to work on Safety. Scale and neglectedness matter, as well as tractability!
“they mainly seem to do a bunch of protesting where they do stuff like call Sam Altman and Dario Amodei evil.”
- Can you show evidence of this please?
3. “Pause AI, the organization, does, frankly, juvenile stunts that make EA/AI safety advocates look less serious.”
- Samesies—can you provide evidence please?
In fact, this whole point seems pretty unjustified. It seems you’re basically arguing that advocacy doesn’t work? Is that correct?
4. “Pause AI’s premise … only makes sense if you have extremely high AI extinction probabilities”
Can you justify this point please? I think it is interesting but it isn’t really explained.
Hi Matrice! I find this comment interesting. Considering the public are in favour of slowing down AI, what evidence points you to the below conclusion?
“Blockading an AI company’s office talking about existential risk from artificial general intelligence won’t convince any standby passenger, it will just make you look like a doomsayer caricature.”
Also, what evidence do you have for the below comment? For example, I met the leader of the voice actors’ association in Australia and we agreed on many topics, including the need for an AISI. In fact, I’d argue you’ve got something important wrong here: talking to policymakers about existential risk instead of catastrophic risks can be counterproductive, because there aren’t many useful policies to prevent existential risk (besides pausing).
“the space of possible AI policies is highly dimensional, so any such coalition, done with little understanding of political strategy, will risk focusing on policies and AI systems that have little to do with existential risk”
Hi! Interesting comment. To what extent does this also describe most charities spinning out of Ambitious Impact’s incubation program?
What a cop-out! Politics is a mind-killer if you’re incapable of observing your mind.
Ten months ago I met Australia’s Assistant Defence Minister about AI Safety because I sent him one email asking for a meeting. I wrote about that here. In total I sent 21 emails to politicians and had 4 meetings. AFAICT there is still no organisation with significant funding that does this as their primary activity. AI Safety advocacy is IMO still extremely low-hanging fruit. My best theory is EAs don’t want to do it / fund it because EAs are drawn to spreadsheets and Google Docs (it isn’t their comparative advantage). Hammers like nails, etc.
Hi Ben! You might be interested to know I literally had a meeting with the Assistant Defence Minister in Australia about 10 months ago off the back of one email. I wrote about it here. AI Safety advocacy is IMO still extremely low-hanging fruit. My best theory is EAs don’t want to do it because EAs are drawn to spreadsheets etc. (it isn’t their comparative advantage).
Hey mate! I use the light for about 4 hours a day, which means I’ll get about 6.84 years out of it.
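For the curious, here’s the rough arithmetic behind that figure (assuming the 10,000-hour bulb rating mentioned in the original quick take below):

$$10{,}000 \text{ hours} \div (4 \text{ hours/day} \times 365.25 \text{ days/year}) \approx 6.84 \text{ years}$$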
In case I wasn’t clear, I was suggesting that tech will have progressed far enough in ~ 6.84 years that worrying about a light in a ceiling fan doesn’t make sense.
I went to buy a ceiling fan recently. The salesperson said I might not want a particular model because it had a light rated for only 10,000 hours, and replacements have been discontinued. I told him I wasn’t worried 😭
Flaming hot take: I wonder if some EAs suffer from Scope Oversensitivity—essentially the inverse of the identifiable victim effect. Take the animal welfare vs global health debate: are we sometimes biased by the sheer magnitude of animal suffering numbers, rather than other relevant factors? Just as the identifiable victim effect leads people to overweight individual stories, maybe we’re overweighting astronomical numbers.
EAs pride themselves on scope sensitivity to combat emotional biases, but taken to an extreme, could this create its own bias? Are we sometimes too seduced by the idea that bigger numbers = bigger problem? The meta-principle might be that any framework, even one designed to correct cognitive biases, needs wisdom and balance to avoid becoming its own kind of distortion.
I’m pretty confident that Marketing is in the top 1-3 skill bases for aspiring Community / Movement Builders.
When I say Marketing, I mean it in the broad, classical sense. In recent years “Marketing” has come to mean “Advertising”, but I use the classic Four P’s of Marketing (listed below) to describe it.
The best places to get such a skill base are FMCG / mass-marketing organisations such as those below. Second best would be consulting firms (e.g. McKinsey & Company):
- Procter & Gamble (P&G)
- Unilever
- Coca-Cola
- Amazon
1. Product
- What you’re selling (goods or services)
- Features and benefits
- Quality, design, packaging
- Brand name and reputation
- Customer service and support

2. Price
- Retail/wholesale pricing
- Discounts and promotions
- Payment terms
- Pricing strategy (premium, economy, etc.)
- Price comparison with competitors

3. Place (Distribution)
- Sales channels
- Physical/online locations
- Market coverage
- Inventory management
- Transportation and logistics
- Accessibility to customers

4. Promotion
- Advertising
- Public relations
- Sales promotions
- Direct marketing
- Digital marketing
- Personal selling
Flaming hot take: if you think Digital Sentience should be taken seriously but not Human Awakening / Enlightenment, then EA culture might have its hooks in a bit deep.
TLDR: you can experience Awakening now, or you can wait 3 years and an LLM will do it for you.
I find periodically reading stories like this pretty inspiring. Thanks for posting!