What you quoted is more like voluntary safety commitments— I think both anti-slavery and AI Safety should not be left to the whims of the guilty parties to carry out.
Holly Elmore ⏸️ 🔸
“Influence from within” on AI doesn’t work, so bye bye
Follow-ups you may not have seen because they are already downvoted to hell.
https://forum.effectivealtruism.org/posts/DDtiXJ6twPb7neYPB/you-can-just-leave-ea
Cool, let’s just redefine “alignment” and call it a day. Wait, isn’t that kind of what you’re doing?
Yeah, freeing slaves, but not necessarily abolishing the institution. I’m not trying to be difficult— I think this difference in goals is the point.
And it’s fine if you want to bite the bullet and say you wouldn’t have been a radical abolitionist, but most modern people think they would have been.
Yeah, they really did us dirty by basically stealing our name after I kicked the founders out of PauseAI because they wanted to do illegal things and disruptive stunts like that.
What’s your point? Tolerating slavery would have been fine as long as they thought it was wrong in theory?
This isn’t abolitionism. Manumission means “letting go”.
Ah so your objection is that the issue is not important enough to compare to slavery in this way? Interesting, wasn’t expecting that here. Are you saying that more EAs would have been abolitionists because slavery was a tangible harm? Looking around at the number of EAs who aren’t vegetarians or animal donors I’m not so sure of that.
The people who want to work with the system on AI Safety generally do believe AI risk is that important, though. Or at least they say that.
EAs would mostly not have been abolitionists
There’s no need for a group like yours to be implicated in AI company wheeling and dealing. Being connected to EA’s decisions has probably made the issue much more confusing for you than it should be— PauseAI is suited for local groups and only involves talking about the danger and giving grassroots support to AI Safety bills. That should obviously have been the sort of thing local EA groups did for AI Safety, but the AI Safety part of EA has always been this weird elitist conspiracy to have stake in the Singularity.
So will you join me in denouncing the horrible AI Safety mistakes, like working with the labs, that the people actually in control of the name EA have made?
Why use the EA name? There is a leadership and they’re telling people where to donate that money and how to think. You have some responsibility for that.
EAs can take any excuse they want not to join PauseAI, these^ are all great. I want people to come to the movement bc they want to pursue that intervention, not bc I was nice to them and never challenged their ideology. And, yes, there is a big world, so we don’t need you if you’re conflicted. I’d like you to at least doubt yourselves before you cause more damage as EA, though.
I like you, Dave, but you don’t get this part.
It’s just very convenient for people to say they mean their own thing by EA. If that’s true, and the leadership bears the responsibility for the problems, idk why me criticizing EA would be a problem, and yet many rank-and-file perceive it as an identity attack on them. So which is it?
If you claim the strength of the unity of EA you can’t disclaim the weaknesses of the parts.
Okay, so what responsibility will you take for EA’s failings?
What stunt at EAG?
It’s a foregone conclusion for EAs that “AI Safety” involves being on the good side of the AI labs. Most of the reasons they dismiss Pause come down to how they think it would compromise their reputation with and access to industry. It’s hard to get them to even consider not cozying up to the labs because technical safety is what they trained to do and is the highest status.
A nested assumption from there is that partial, marginal improvements in technical safety still count as wins, but anything less than achieving a full international Pause would mean the PauseAI strategy failed. I anticipate having to explain to you how sentiment rallying works and how moving the Overton window helps many safety measures short of Pause. Most EAs have very all-or-nothing thinking about this, such that they think PauseAI is a Hail Mary instead of a strategy that works at all doses. This is usually bc they know very little about social movements.

EAs tend to be very allergic to speaking effectively for advocacy, and they believe that using simpler statements, which they consider unnuanced, will reflect negatively on the cause, because they are trying to impress industry insiders.
EAs have ~zero appreciation for the psychological difficulty of “changing the AI industry from within”. They are quickly captured and then rationalize together, their tools of discourse too nuanced and flexible to give them any clear conclusions when they can make outs for themselves instead. When I say this difficulty makes it a very unrealistic intervention with high backfire potential, EAs think they are proving me wrong by saying that the greatest outcome of all would be to influence the industry from within and get AI benefits, so that’s what they have to pursue.
I thought EA was too eager to accept fault for a few people committing financial crimes out of their sight. The average EA actually is complicit in the safetywashing of OpenAI and Anthropic! Maybe that’s why they don’t want to think about it…