Creating superintelligent artificial agents without a worldwide referendum is ethically unjustifiable. Until a consensus is reached on whether to bring such technology into existence, a global moratorium is required (n.b. we already have AGI).
Hi Marcus, I’m in the mood for a bit of debate, so I’m going to take a stab at responding to all four of your points :)
LMK what you think!
1. This is an argument against a pause policy, not the Pause org or a Pause movement. I think discerning funders need to see the differences, especially if they're thinking on the margin.
2. “Pausing AI development for any meaningful amount of time is incredibly unlikely to occur.” < I think anything other than AGI in less than 10 years is unlikely to occur, but that isn’t a good argument against working on Safety. Scale and neglectedness matter, as well as tractability!
“they mainly seem to do a bunch of protesting where they do stuff like call Sam Altman and Dario Amodei evil.”
- Can you show evidence of this please?
3. “Pause AI, the organization, does, frankly, juvenile stunts that make EA/AI safety advocates look less serious.”
- Samesies: can you provide evidence, please?
In fact, this whole point seems pretty unjustified. It seems you’re basically arguing that advocacy doesn’t work? Is that correct?
4. “Pause AI’s premise … only makes sense if you have extremely high AI extinction probabilities”
Can you justify this point please? I think it is interesting but it isn’t really explained.
Hi Matrice! I find this comment interesting. Considering the public are in favour of slowing down AI, what evidence points you to the below conclusion?
“Blockading an AI company’s office talking about existential risk from artificial general intelligence won’t convince any standby passenger, it will just make you look like a doomsayer caricature.”
Also, what evidence do you have for the below comment? For example, I met the leader of the voice actors association in Australia and we agreed on many topics, including the need for an AISI. In fact, I’d argue you’ve got something important wrong here: talking to policymakers about existential risk instead of catastrophic risks can be counterproductive, because there aren’t many useful policies to prevent the former (besides pausing).
“ the space of possible AI policies is highly dimensional, so any such coalition, done with little understanding of political strategy, will risk focusing on policies and AI systems that have little to do with existential risk”
Hi! Interesting comment. To what extent does this also describe most charities spinning out of Ambitious Impact’s incubation program?
What a cop-out! Politics is a mind-killer if you’re incapable of observing your mind.
Ten months ago I met Australia’s Assistant Defence Minister about AI Safety because I sent him one email asking for a meeting. I wrote about that here. In total I sent 21 emails to politicians and had 4 meetings. AFAICT there is still no organisation with significant funding that does this as their primary activity. AI Safety advocacy is IMO still extremely low-hanging fruit. My best theory is EAs don’t want to do it / fund it because EAs are drawn to spreadsheets and Google Docs (it isn’t their comparative advantage). Hammers like nails, etc.
Hi Ben! You might be interested to know I literally had a meeting with the Assistant Defence Minister in Australia about 10 months ago off the back of one email. I wrote about it here. AI Safety advocacy is IMO still extremely low-hanging fruit. My best theory is EAs don’t want to do it because EAs are drawn to spreadsheets etc. (it isn’t their comparative advantage).
Hey mate! I use the light for about 4 hours a day, which means I’ll get about 6.84 years out of it (10,000 hours ÷ 4 hours/day ≈ 2,500 days ≈ 6.84 years).
In case I wasn’t clear, I was suggesting that tech will have progressed far enough in ~ 6.84 years that worrying about a light in a ceiling fan doesn’t make sense.
I went to buy a ceiling fan recently. The salesperson said I might not want a particular model because its light was only rated for 10,000 hours, and replacements have been discontinued. I told him I wasn’t worried 😭
Flaming hot take: I wonder if some EAs suffer from Scope Oversensitivity, essentially the inverse of the identifiable victim effect. Take the animal welfare vs global health debate: are we sometimes biased by the sheer magnitude of the animal suffering numbers, rather than other relevant factors? Just as the identifiable victim effect leads people to overweight individual stories, maybe we’re overweighting astronomical numbers.
EAs pride themselves on scope sensitivity to combat emotional biases, but taken to an extreme, could this create its own bias? Are we sometimes too seduced by “bigger numbers = bigger problem”? The meta-principle might be that any framework, even one designed to correct cognitive biases, needs wisdom and balance to avoid becoming its own kind of distortion.
I’m pretty confident that Marketing is in the top 1-3 skill bases for aspiring Community / Movement Builders.
When I say Marketing, I mean it in the broad, classic sense. In recent years “Marketing” has come to mean “Advertising”, but I use the classic Four P’s of Marketing to describe it.
The best places to build such a skill base are FMCG / mass-marketing organisations such as the below. Second best would be consulting firms (e.g. McKinsey & Company):
- Procter & Gamble (P&G)
- Unilever
- Coca-Cola
- Amazon
1. Product
- What you’re selling (goods or services)
- Features and benefits
- Quality, design, packaging
- Brand name and reputation
- Customer service and support

2. Price
- Retail/wholesale pricing
- Discounts and promotions
- Payment terms
- Pricing strategy (premium, economy, etc.)
- Price comparison with competitors

3. Place (Distribution)
- Sales channels
- Physical/online locations
- Market coverage
- Inventory management
- Transportation and logistics
- Accessibility to customers

4. Promotion
- Advertising
- Public relations
- Sales promotions
- Direct marketing
- Digital marketing
- Personal selling
Flaming hot take: if you think Digital Sentience should be taken seriously but not Human Awakening / Enlightenment, then EA culture might have its hooks in a bit deep.
TL;DR: you can experience Awakening now, or you can wait 3 years and an LLM will do it for you.
Having a nondual Awakening was the second most important thing to happen to me (after my daughter’s birth). It has led to incredibly low levels of suffering and incredibly high levels of wellbeing. I write this because I think it is still under-appreciated and attainable for most people (maybe literally anyone).
There are traditions (Dzogchen, Zen, modern nonduality) where this shift in consciousness can be experienced simply by hearing the right combination of words and insights. As our understanding and tools for communicating these insights evolve, including through advances in AI, I believe this transformative experience could become accessible to many more people.
Yeah, I’ve seen that. I think costly signalling is very real, and the effort to create something formal, polished and thoughtful would go a long way. But obviously I have no idea what else you’ve got on your plate, so YMMV.
Hello Habryka! I occasionally see you post something OP-critical and am now wondering: “is there a single post where Habryka shares all of his OP-related critiques in one spot?”
If that doesn’t already exist, I think creating it could be very valuable.
I spent some time with Claude this morning trying to figure out why I find it cringe calling myself an EA (I never call myself an EA, even though many in EA would call me an EA).
The reason: calling myself “EA” feels cringe because it’s inherently a movement/community label; it always carries that social identity baggage with it, even when I’m just trying to describe my personal philosophical views.
I am happy to describe myself as a Buddhist or a Utilitarian because I don’t think those labels carry the same baggage (at least, not within the broader community context I find myself in: Western, online, democratic, Australian, etc.).
I mean, I’m mostly riffing here, but according to Claude: “Agriculture independently developed in approximately 7-12 major centers around the world.”
If we ran the simulation 100 times, would AGI appear in > 7-12 centres around the world? Maybe, I dunno.
Anyway, Happy Friday det!
Imagine running 100 simulations of humanity’s story. In every single one, the same pattern emerges: the moment we choose agriculture over hunting and gathering, we unknowingly start a countdown to our own extinction through AGI. If this were true, I think it suggests that our best chance at long-term survival is to stay hunter-gatherers, and that what we call ‘progress’ is actually a kind of cosmic trap.
I understand that this topic gets people excited, but commenters are confusing a Pause policy with a Pause movement with the organisation called PauseAI.
Commenters are also confusing ‘should we give PauseAI more money?’ with ‘would it be good if we paused frontier models tomorrow?’
I’ve never seen a topic in EA get a subsection of the community so out of sorts. It makes me extremely suspicious.