Thoughts on my relationship to EA (and please donate to PauseAI US)
Hey, EAs. Feels a little strange to address you through the EA Forum since I stopped considering myself one of you. For those who don't know, I was an EA leader for a decade, organizing at Harvard and working at Rethink Priorities, before pursuing PauseAI forced me to quit my job and eventually give up all the comforts of a community I loved. I stopped commenting on the forum (unless I have to clear up org stuff) after the mods reprimanded me for being upset about many of you being the problem. (That same week I had a curated post. lol, lmao.) I didn't want to lose the high-karma account, and I felt like I had basically been told I can't make my case here if it makes people feel bad, so I've mostly gone dormant.
Still, seems like PauseAI and EA have a lot of common cause and should be working together, right? Alas, no. The main reason I didn't participate in the donation election this year was that my Dad was dying and I didn't have time. It's not that I skipped it on principle or anything. I would have done it with more time, and I was invited to do so, but it wasn't a high enough fundraising or org priority to do under time constraints, and I'm going to say a little about why. It's revealing about the issues with my work and EA in general.
The experience I had in the donation election last year was of spending tons of time answering very demanding questions, doing really well in the rankings, and still not getting the election money. We did get some separate donations because the donors read the discussion. But it wasn’t high-yield. It was worth my time financially, but maybe not counterfactually. I had hoped it would help EAs to understand PauseAI and be worth it for that reason. But I no longer think it is worth my time or energy to try to convince you.
I think, if it survives at all, EA will eventually split into a pro-AI-industry camp, who basically become openly bad under the fig leaf of Abundance or Singularitarianism, and an anti-AI-industry camp, which will be majority advocacy of the type we're pioneering at PauseAI. I think the only meaningful technical safety work is going to come after capabilities are paused, with actual external regulatory power. The current narrative (that, for example, Anthropic wishes it didn't have to build) is riddled with holes, and it will snap. I wish I could make you see this, because it seems like you should care, but you're actually the hardest people to convince because you're the most invested in the broken narrative.
I don't think talking with you on this forum with your abstruse culture and rules is the way to bring EA's heart back to the right place— where the reward is not status or friends or working on your favorite technology, but helping people and animals. What will change your minds is Pause sentiment becoming more popular, not me beating you in argument. I'm already beating you, and you just define the game so that the conclusion of moving toward advocacy can't win. Or you interpret justified, straightforward disapproval of your complicity in the greatest risk of our time as simply too impolite.
You’ve lost the plot, you’re tedious to deal with, and the ROI on talking to you just isn’t there. I’ve taken a lot of emotional damage having a group that was kind of my extended family force me to choose between them and doing the right thing. I can’t disentangle my work from EA, but I’m giving myself a lot of space.
I think you’re using specific demands for rigor (rigor feels virtuous!) to avoid thinking about whether Pause is the right option for yourselves. Like if you can make me seem incompetent, that means you can forget about AI Safety advocacy or international cooperation or Pause as avenues. Case in point: EAs wouldn’t come to protests, then they pointed to my protests being small to dismiss Pause as a policy or messaging strategy!
I believe many of you will eventually side with me, and you will be welcomed into PauseAI whether you identify with EA by then or not. But PauseAI is not an EA organization. (That ship sailed in 2023 when Open Phil gave me nonsensical reasons they wouldn’t explore Pause as a policy or advocacy, which I now know they did because they serve Anthropic’s interests. They just lied to me and tried to make me feel like I was stupid to make me drop it. Many of you took your cues from them.) Something EAs tend to misunderstand is that PauseAI is not an ideology— it’s a grassroots coalition of all types of people who think a Pause would be good. PauseAI is for everyone. This is one of my favorite things about it.
EA used to be about doing the most good wherever it could be found, and that used to take people a lot of places— spreadsheets, yes, but also RCT field trials and pledge drives and giving games. Now it’s about working at an AI lab or wishing you could work at an AI lab. Most of you can’t do that. (Which is great, because it’s evil! It is literally being the problem.) Aren’t you bored of hanging on around here watching AI developments like a spectator sport and fanboying for the cool kids? Some of you are too young to remember what it was like to be surrounded by people whose reward was helping people, and who were excited and honored to have the chance to do unglamorous high-impact work others were unwilling to do. It was inspiring— spiritual food and a shining example to me— and that is the energy in PauseAI now. Because no one is here for an in-group or glory or an easy life. We’re here to protect the world.
—————————————————————
Our projected budget this year (with no major upgrades, in line with the previous year that produced the results on our flyer) is $440k. Of course, with all of the infrastructure we've built and the community of volunteers we have amassed in 2025, our 2026 results will go much further. My 2026 raise goal is $1M, both because we need runway and so that we have the security to hire beyond our 3-person staff.
Donate here instantly: https://www.zeffy.com/en-US/donation-form/donate-to-help-pause-ai
If you’re serious about donating over $1k, you can contact me to talk in more detail.
I’m very sorry to hear about your dad. I hope those who would have voted for PauseAI in the donation election will consider donating to you directly.
On the points you raise, one thing stands out to me: you mention how hard it is to convince EAs that your arguments are right. But the way you’ve written this post (generalising about all EAs, making broad claims about their career goals, saying you’re already beating them in arguments) suggests to me you’re not very open to being convinced by them either. I find this sad, because I think that PauseAI is sitting in an important space (grassroots AI activism), and I’d hope the EA community & the PauseAI community could productively exchange ideas.
You’re right, I’m not.
What I feel upset about is that EA isn't the kind of group anymore that wanted to do grassroots advocacy for AI Safety. Early EA would have been all over it. Now EAs want to be part of building AI. I'm not after some trade where you listen to me and I listen to you in exchange. I know your arguments inside and out. You are just wrong, and you don't care about finding out what is right— you're protecting your conclusions. That's a betrayal of yourselves.
There are many different people in EA with different takes.
By claiming "you are just wrong" in the second person plural, you are making it harder for people who are not in the "want to build AI" camp to engage with your object-level arguments.
Why don’t you defend your point?
I imagine the people who are not already part of the AI safety memeplex could find them convincing. Why not engage with them?
Btw I’m undecided on what the right marginal actions are wrt AI and am trying to form my inside view.
Maybe reconsider whether EA is the right community for you if you don't agree with the agenda of the people at the top. They are shaping your ability to think critically in many ways: through who they fund, who is treated as cool and respected as an expert, etc.
You're right, part of the problem is that you feel lumped in with them even if you have no decision-making power over what they do. Don't fight their battles for them if you don't even agree— let go of the baggage and think for yourself.
I feel lumped in with them because you use the second person plural. It's not a glitch, it's a direct consequence of how you write.
What I say is: maybe you're right about the pause agenda, I don't know.
But if you come to a group of people saying "you are just wrong", that is not engaging, and then I feel irritated instead of considering your case.
You feel lumped in with them bc you identify as an EA.
Sometimes the truth irritates.
I don’t identify as EA. You can check my post history. I try to form my own views and not defer to leadership or celebrities.
I agree with you that there's a problem with safetywashing, conflicts of interest and bad epistemic practices in mainstream EA AI safety discourse.
My problem with this post is that the way of presenting the arguments is like "wake up, I'm right and you are wrong", directed at a group of people that includes people who have never thought about what you're talking about and people who agree with you.
I also agree that the truth sometimes irritates, but that doesn't mean that if something irritates me I should trust it more.
Is this directed at me? Because I didn’t want to do this, and I don’t see why you think I did this (like, I clearly never threatened not to care about a problem?).
If I take the way that you’ve used “you” in your post and in the comments here seriously, you’ve said a bunch of things that I believe are clearly not true:
No actually I posted that response under the wrong comment— sorry!
I can actually read most of this and feel understanding, but pieces like “I think, if it survives at all, EA will” or “I’m already beating you and” strain that capacity quite a bit.
You do actually disagree with some people, and maybe making that clear and spelling it out is worth it. But you're taking people further out, who could be sympathetic but are still deciding how they feel, and pushing them away by trying to paint a community they may care about as hollow and death-bound.
As far as I can tell, posts like this don’t help anyone, neither you, nor Pause, nor EA. You’re expecting antagonism to wake people up, but is that really an effective strategy for building support? Look at your donors, look at those who are still more aligned with EA than you are. Did they come from one of the many angry-style posts you’ve written recently, or one of the earlier or more substantive ones arguing for the core of Pause and why it’s needed? You know your donors better, but I know where I’d be making my bet.
I think an antagonistic tone actually works well in recruiting folks who are still EA-adjacent, and may still be somewhat affiliated with the community, or otherwise care a lot about some EA-branded cause areas like AI, but are weary of the discourse and cultural norms and professionalization of the space. For a space that supposedly loves criticism, EA really doesn't make real space for criticizing a lot of key assumptions and orgs, and it often feels like, if you care about certain causes that aren't mainstream outside of the movement, you either stick around and keep your mouth shut, or stop seeking to help with those causes. Someone like Holly taking an antagonistic tone means that there are others out there who you could meet and organize with who might think about things in an EA-ish, systematic way... but who aren't contained by organizational allegiance. And, I'd argue, that integrity is a breath of fresh air and I suspect is very effective in attracting disillusioned EAs.
Disagreement is cool and awesome. Even intense disagreement ("I think this view is deeply misguided"). I really see no room for antagonism between two people who could be having an epistemically healthy conversation.
Fair enough. Would you consider yourself one of those disillusioned EAs that’s been attracted by the message?
Like Noah said, disagreement is great; closed-mindedness and antagonism are not.
Thanks for confirming one of the problems I wrote about— here you are threatening not to care about a problem in the world because I made you uncomfortable. This is a constant threat from EAs, that I or the cause are not gonna have their support if I don’t fall in line with what they want to hear.
1) You should care to think about this because of the impact on the world. No matter how much I rub you the wrong way, you should have a burning curiosity to figure out if grassroots is promising. But you don’t— you want me to beg you to please consider it as a favor. No.
2) I already don’t have your support! You can’t threaten to take away something you are already withholding. You hold no cards. Like the AI companies at this point, PauseAI doesn’t need you. But if you want help because you came to that conclusion about how to help the world, great!
I think if your approach is causing you to think that Tristan is “threatening not to care” about AI risk, then you’re really missing the mark, Holly.
Tristan demonstrably has made pretty big personal sacrifices to work on AI risk, literally worked with Felix on your team on an AI Safety Camp project about arguing that grassroots Congressional outreach is good (I was also working on that team), and is continuing to look for opportunities to work on AI risk reduction during and after grad school.
Tristan is, in short, the kind of person that if you were looking to hire another person in DC, I’d be recommending to you to consider. He very much is aligned with your core strategy! If I had to guess, I’d guess that he considers himself to be a supporter of PauseAI US’s approach!
Given how you're engaging on this thread, I'll bet that you'll reply to this post by saying something like, "see, his response proves how pernicious EA culture is, that it can corrupt even people who should be on board." I would politely ask you to consider the possibility instead that, at least sometimes, you're shooting at the wrong targets.
I like you, Dave, but you don’t get this part.
I don’t think pointing out problems with the effectiveness of your approach is the same as “threatening not to care”.
I don’t see many productive ways this continues so I’ll keep it short.
If someone thinks you’re making poor decisions, and wants to see your downfall, silence is the best way to go about that. Engaging with you further is not, and should probably clue you in that that’s not their primary motivation.
Chalking up anything negative anyone says about your work as part of The Big Plot Against You closes the door to productive conversation quickly. You entrench yourself and flag that there’s little chance you change your mind, and I then question why I’m responding.
“PauseAI doesn’t need you” takes the door you were already closing and slams it shut. I truly hoped for better.
EAs can take any excuse they want not to join PauseAI, these^ are all great. I want people to come to the movement bc they want to pursue that intervention, not bc I was nice to them and never challenged their ideology. And, yes, there is a big world, so we don’t need you if you’re conflicted. I’d like you to at least doubt yourselves before you cause more damage as EA, though.
I have Thoughts about the rest of it, which I am not sure whether I will write up, but for now: I am sad about your Dad's death and glad you got to prioritise spending some time with him.
I expect there is a fair bit we disagree about, but thanks for your integrity and effort and vision.
Holly --
Thanks for this assertive, candid, blunt, challenging post.
You and I have, I think, reached similar views on some of the critical weaknesses of EA as it’s currently led, run, funded, and defended.
All too often, ‘EA discourse norms’ have been overly influenced by LessWrong discourse norms, where an ivory-tower fetishization of ‘rational discourse’, ‘finding cruxes’, ‘updating priors’, ‘avoiding ad hominems’, ‘steel-manning arguments’, etc becomes a substitute for effective social or political action in the world as it is, given human nature as it is, and given the existential risks that we actually face.
And, recently, way too many EAs have been seduced into the Dario Amodei delusion that if the ‘good guys’ build Artificial Superintelligence, with good intentions, and enough effort on ‘technical AI alignment’, we’ll all be fine.
That's a great excuse for EAs to go over to 80k Hours, which still (unbelievably, and utterly immorally) posts dozens of 'AI safety' jobs at Anthropic and OpenAI, and to get that sweet, sweet salary to live in the Bay Area, hang out with the cool kids, and pretend you're doing good. (When, in fact, you're being used as a safety-washing prop by some of the most reckless corporations on Earth.)
People respond to incentives. Even EAs.
And if your prospects of being hired by Anthropic for a mid-6-figure salary doing corporate safety-washing depend on not making a fuss, and denouncing Pause AI, and ignoring the passion and dedication of those who believe ASI is actually an extinction risk, then it's tempting for many EAs to ignore your message, downvote your post (and probably this one), and carry on as usual, feeling virtuous about donating a bit of money to saving some shrimp, or whatever.
I’m extremely saddened by the dismissal of Pause AI by mainstream EAs. While it was in 2019 and therefore basically another lifetime, I well remember a time when we had many enthusiastic people in EA with time and energy that we were struggling to put to work, leading to all that discussion of “Task Y” and whatnot, but we let them turn into bycatch instead. It seems now that something like Pause AI should always have been an option, and ideally THE option. Thank you for seeing clearly what others haven’t and refuse to.
Sorry about your father.
I think there’s a much more mundane and much more epistemically healthy way to understand this disagreement.
Perhaps this is naive but my view currently just is: most people are disagreeing about a few concrete empirical and strategic parameters: the likely effectiveness and public reception of a Pause movement, how much meaningful safety work can be done from inside labs, and (probably) estimates of p(doom). Given how uncertain and high-stakes these questions are, it seems completely unsurprising that reasonable people would land in very different places.
It’s fine to worry about incentives and institutional bias — that could matter — but treating this as if the disagreement is obviously resolved, or as if it cleanly divides the world into “accelerationists” and “pause-ers,” strikes me as bad epistemics.
I actually think the disagreement goes much deeper than the parameters you list, and they seem like simple parameters to you bc you are taking a lot of EA assumptions for granted.
If people in EA want to act more supportive of PauseAI to prove me wrong then be my guest.
Can you say more about what these EA assumptions are?
It’s a foregone conclusion for EAs that “AI Safety” involves being on the good side of the AI labs. Most of the reasons they dismiss Pause come down to how they think it would compromise their reputation with and access to industry. It’s hard to get them to even consider not cozying up to the labs because technical safety is what they trained to do and is the highest status.
A nested assumption from there is that partial, marginal improvements in technical safety work count as progress, but anything less than achieving a full international Pause would mean the PauseAI strategy failed. I anticipate having to explain to you how sentiment rallying works and how moving the Overton window is helpful to many safety measures short of a Pause— most EAs have very all-or-nothing thinking about this, such that they think PauseAI is a Hail Mary instead of a strategy that works at all doses. This is usually bc they know very little about social movements.
EAs tend to be very allergic to speaking effectively for advocacy, and they believe that using simpler statements that they consider to be unnuanced is going to reflect negatively on the cause because they are trying to impress industry insiders.
EAs have ~zero appreciation for the psychological difficulty of "changing the AI industry from within". They are quickly captured and then rationalize together, their tools of discourse too nuanced and flexible to give them any clear conclusions when they can give themselves outs instead. When I say this difficulty makes it a very unrealistic intervention with high backfire potential, EAs think they are proving me wrong by saying that the greatest outcome of all would be to influence the industry from within and get AI benefits, so that's what they have to pursue.
I think I disagree with a bunch of what you said there: I don't think good AIS necessarily involves being on the good side of AI labs (tho I think there are good arguments for this), I think large movement building without getting a full pause would be a big win for PauseAI and that many EAs would agree with this (despite the fact that I, and maybe they, know little about social movements), and I do think making simple statements reflects negatively from a whole host of perspectives and should be taken pretty seriously.
I'd be interested in hearing how you/others at PauseAI have tracked how much marginal improvement the advocacy you've been doing so far has actually produced. What are the wins? What were the costs? Happy to also get on a call with someone about this.
If there are real numbers on this (I know it’s hard in spaces like yours), I’d be curious about hearing why they aren’t often posted on the forum/LW, as that, I think, would be more helpful to people (and tracking the cost-effectiveness of AIS in general is super underrated, so this would be good).
If you don’t have the numbers, I would ask why you are so confident that this is actually the right approach.
What stunt at EAG?
Oh, sorry. I asked my friend, and they said it was Stop AI—not Pause AI. They basically protested in the middle of a talk with the CEA CEO saying things like he is a murderer, which I just think is pretty nuts. That is my bad, though. I have edited that part out of my initial comment.
Yeah they really did us dirty by basically stealing our name when I kicked the founders out of PauseAI because they want to do illegal things and disruptive stunts like that.
Yea, sorry about that. That really sucks.
Does the evidence support a conclusion that EAs as a whole have some sort of consensus that is against pause advocacy and/or PauseAI US? The evidence most readily available to me seems mixed.
PauseAI US's fundraising challenges suggest that the major funding sources are—at a minimum—not particularly excited about the org. [EDIT: Struck this out in light of Holly's comment below.] I don't have any knowledge about PauseAI's success among rank-and-file EA donors (and PauseAI may not have good insight here either, given that donations don't come with an EA flag on them and the base rate of EA-aligned donations to a random org in the AI space may be hard to discern).
PauseAI ranked very well in last year’s donation election, although it didn’t quite end up in the money.
Holly’s posts relating to AI issues have on average received significant karma on net over the past ~2 years, such as:
To be fair, other posts have low (but still positive) karma. Some of these I would characterize as sharp in tone—to be clear, I am using sharp in a descriptive sense, trying to avoid any evaluation of the tone here. For example, the title and first paragraph of this post claim that a significant number of readers have been "[s]elling out," "are deluded," and need to "[w]ake up." My recollection is that sharply toned posts tend to incur a karma penalty irrespective of the merits of the perspective offered (unless the target is, e.g., SBF). But although the net karma on that post is only +3, that is on 60 votes—suggesting quite a few upvotes to counter the downvotes.
Holly also has some high-karma comments which express a lot of frustration with EA being too cozy with Big AI, such as this (+51) and this (+39). There are also comments with net negative karma, some of which are very sharp and some of which I think are not reasonably explainable on that basis.
There’s of course much more to EA than the Forum, but its metrics have the advantage of being quantifiable and thus maybe a little less vibes-based than some competing measures.
I think a lot of the younger and less involved people interacting with this site like the Pause position, just like 70-80% of the public like it when it's explained to them. I wish those people were just as much EA as the Bay Area community doing direct work, but by design they are not. That core community is where the beef comes from. iirc you don't do EA stuff outside this forum, so you wouldn't know except for what those people post here.
The way you phrased that came off as insulting, and it took me a while to realize you weren’t saying “you don’t have money bc you suck”.
We have many funders, including some you probably meant to refer to here, like FLI and SFF. It was Open Phil who gave me the runaround bc they want Anthropic to win. We are not particularly "challenged" at fundraising for an org at our stage, especially considering I never raised before PauseAI and I've been the sole fundraiser. There's lots of other money in the world. We're probably healthier with our funds in terms of diversity and robustness and maintaining mission control than most EA charities that get all their money from OP/LTFF.
Thanks for the clarification; I struck that bullet point from my comment. Sorry that my phrasing didn’t accomplish what I meant to say—that a non-funding decision would be consistent with anything between the funder being strongly opposed to the organization and the funder concluding that it was just under their bar. I’m glad to hear PauseAI is doing better with fundraising than I thought.
Follow-ups you may not have seen because they are already downvoted to hell.
https://forum.effectivealtruism.org/posts/DDtiXJ6twPb7neYPB/you-can-just-leave-ea
https://forum.effectivealtruism.org/posts/ije4YiHzwBECBDMCQ/eas-would-mostly-not-have-been-abolitionists