I’m curious to understand better where people disagree with this comment.
Tobias Häberli
I don’t think this quite works as a response to Alene’s point. Many things are necessary/valuable preconditions for doing good. We need food, water, functioning infrastructure, preserving democracy, the internet, etc. The fact that something is a precondition for other work doesn’t by itself make it a high-priority EA cause area.
If I apply the ITN framework to ‘preserving democracy’, I get something like:
Importance: Not losing democracy is very important. But losing it would arguably have been similarly catastrophic 10 years ago. The real question is how much the probability of losing it has actually increased. Even though the probability seems higher right now, I expect it to still be relatively small – but I’m uncertain.
Neglectedness: Very low. I agree with Alene’s core point that it’s one of the least neglected causes right now.
Tractability: I’d argue somewhat low, though I’m highly uncertain. There’s little reason to believe there’s lots of low-hanging fruit that hasn’t been picked over decades and centuries of interest in making democracies stable.
It’s also worth noting that much of the current concern is specifically about US democracy. The US matters a lot (largest economy, major influence on the rest of the world, where AI is mostly going to be built), and tractability there is plausibly higher right now – but that’s a narrower cause than ‘preserving democracy’ full stop (e.g. by reducing global democratic backsliding).
Thanks for this post – really would have liked having such a filter in the past.
We estimate that The Vegan Filter could cut the convenience barrier roughly in half by addressing the “supermarket barrier,” one of the largest friction points for new vegans.
Can you say more about why you estimate this to halve the convenience barrier?
I expect this to be much lower, maybe cutting the inconvenience of being vegan by 1-5%. The filter could still be worth the effort, of course :)
- Tobias Häberli · 29 Dec 2025 8:50 UTC · 16 points · 0 ∶ 0 · in reply to: Mjreard’s comment on: Pagw’s Shortform
which I don’t think Veganuary is.
Seems true. Looking at Google Trends, ‘veganuary’ is searched for a lot less than ‘movember’.
And I’d suspect that ‘movember’ isn’t all that well-known either – for example, compare it to Black History Month.
- Tobias Häberli · 23 Dec 2025 19:11 UTC · 7 points · 0 ∶ 0
I’m very sorry to hear about your dad. I hope those who would have voted for PauseAI in the donation election will consider donating to you directly.
On the points you raise, one thing stands out to me: you mention how hard it is to convince EAs that your arguments are right. But the way you’ve written this post (generalising about all EAs, making broad claims about their career goals, saying you’re already beating them in arguments) suggests to me you’re not very open to being convinced by them either. I find this sad, because I think that PauseAI is sitting in an important space (grassroots AI activism), and I’d hope the EA community & the PauseAI community could productively exchange ideas.
In cases where there is an established science or academic field or mainstream expert community, the default stance of people in EA should be nearly complete deference to expert opinion, with deference moderately decreasing only when people become properly educated (i.e., via formal education or a process approximating formal education) or credentialed in a subject.
If you took this seriously, in 2011 you’d have had no basis to trust GiveWell (quite new to charity evaluation, not strongly connected to the field, no credentials) over Charity Navigator (10 years of existence, considered mainstream experts, a CEO with 30 years of experience in the charity sector).
But, you could have just looked at their website (GiveWell, Charity Navigator) and tried to figure out yourself whether one of these organisations is better at evaluating charities.
I am extremely skeptical of any claim that an individual or a group is competent at assessing research in any and all extant fields of study, since this would seem to imply that individual or group possesses preternatural abilities that just aren’t realistic given what we know about human limitations.
This feels like a motte (“skeptical of any claim that an individual or a group is competent at assessing research in any and all extant fields of study”) and bailey (nearly complete deference to expert opinion, decreasing only with formal education or credentials). GiveWell obviously never claimed to be experts in much beyond GHW charity evaluation.
> early critiques of GiveWell were basically “Who are you, with no background in global development or in traditional philanthropy, to think you can provide good charity evaluations?”
That seems like a perfectly reasonable, fair challenge to put to GiveWell. That’s the right question for people to ask!
I agree with this if you read the challenge literally, but the actual challenges were usually closer to a reflexive dismissal without actually engaging with GiveWell’s work.
Also, I disagree that the only way we were able to build trust in GiveWell was through this:
only when people become properly educated (i.e., via formal education or a process approximating formal education) or credentialed in a subject.
We can often just look at the object-level work, study the research and responses to it, and make up our own minds. Credentials are often useful for navigating this, but not always necessary.
- Tobias Häberli · 11 Dec 2025 8:47 UTC · 2 points · 0 ∶ 0 · in reply to: Yarrow Bouchard 🔸’s comment on: The funding conversation we left unfinished
Dustin Moskovitz’s net worth is $12 billion and he and Cari Tuna have pledged to give at least 50% of it away, so that’s at least $6 billion.
I think this pledge is over their lifetime, not over the next 2-6 years. OP/CG seems to be spending in the realm of $1 billion per year (e.g. this, this), which would mean $2-6 billion over Austin’s time frame.
- Tobias Häberli · 10 Dec 2025 17:57 UTC · 6 points · 1 ∶ 0 · in reply to: Yarrow Bouchard 🔸’s comment on: The funding conversation we left unfinished
lots of money will also be given to meta-EA, EA infrastructure, EA community building, EA funds, that sort of thing?
You’re probably doubting this because you don’t think it’s a good way to spend money. But that doesn’t mean that the Anthropic employees agree with you.
The not-super-serious answer would be: US universities are well-funded in part because rich alumni like to fund them. There might be similar reasons why Anthropic employees might want to fund EA infrastructure/community building.
If there is an influx of money into ‘that sort of thing’ in 2026/2027, I’d expect it to look different to the 2018-2022 spending in these areas (e.g. less general longtermist focused, more AI focused, maybe more decentralised, etc.).
- Tobias Häberli · 10 Dec 2025 17:53 UTC · 6 points · 1 ∶ 0 · in reply to: Yarrow Bouchard 🔸’s comment on: The funding conversation we left unfinished
Given Karnofsky’s career history, he doesn’t seem like the kind of guy to want to just outsource his family’s philanthropy to EA funds or something like that.
He was leading the Open Philanthropy arm that was primarily responsible for funding many of the things you list here:
or do you think lots of money will also be given to meta-EA, EA infrastructure, EA community building, EA funds, that sort of thing
- Tobias Häberli · 10 Dec 2025 17:48 UTC · 24 points · 7 ∶ 0 · in reply to: Nathan Young’s comment on: The funding conversation we left unfinished
Had Phil been listened to, then perhaps much of the FTX money would have been put aside, and things could have gone quite differently.
My understanding of what happened is different:
Not that much of the FTX FF money was ever awarded (~$150–200 million, details).
A lot of the FTX Future Fund money could have been clawed back (I’m not sure how often this actually happened) – especially if it was unspent.
It was sometimes voluntarily returned by EA organisations (e.g. BERI) or paid back as part of a settlement (e.g. Effective Ventures).
@Daniel_Dewey, can you prove this song wrong?
I’m somewhat surprised about the lack of information about Anthropic employees’ donation plans.
Potential reasons:
They are all working full-time (probably more) and it’s really hard to get clarity on your own donation plans in such a situation. And communicating about them is even harder.
They might have specific plans but talking about them publicly is tricky. It might imply information about Anthropic’s plans (e.g. regarding an IPO) or about internal sentiment on the prospect of Anthropic gaining/losing value in the future. Or just plain old ‘what happens to your inbox once you imply that you’re going to be donating >$10M soon?’.
They might not see a lot of benefit in communicating publicly about this. Maybe they are chatting with Coefficient Giving about their plans. Maybe they are planning their own foundation.
There might just not be that many people with significant wealth at Anthropic who are planning on donating effectively anytime soon. This could be because of value drift, because they expect their assets to increase in value and want to donate later, or because they don’t see great donation opportunities yet.
Interested to hear whether I’ve missed a major consideration and whether people have takes about which of these reasons is most likely/explanatory.
The Stop AI response posted here seems maybe fine in isolation. This might have largely happened due to the Stop AI co-founder having a mental breakdown. But I would hope for Stop AI to deeply consider their role in this as well. The response of Remmelt Ellen (who is a frequent EA Forum contributor and advisor to Stop AI) doesn’t make me hopeful, especially the bolded parts:
An early activist at Stop AI had a mental health crisis and went missing. He hit the leader and said stuff he’d never condone anyone in the group to say, and apologized for it after. Two takeaways:
- Act with care. Find Sam.
- Stop the ‘AGI may kill us by 2027’ shit please.[...]
I advised Stop AI organisers to change up the statement before they put it out. But they didn’t. How to see this: it is a mental health crisis. Treat the person going through it with care, so they don’t go over the edge (meaning: don’t commit suicide). 2/
The organisers checked in with Sam everyday. They did everything they could. Then he went missing. From what I know about Sam, he must have felt guilt-stricken about lashing out as he did. He left both his laptop and phone behind and the door unlocked. I hope he’s alive. 3/
Sam panicked often in the months before. A few co-organisers had a stern chat with him, and after that people agreed he needed to move out of his early role of influence. Sam himself was adamant about being democratic at Stop AI, where people could be voted in or out. 4/
You may wonder whether that panic came from hooking onto some ungrounded thinking from Yudkowsky. Put roughly: that an ML model in the next few years could reach a threshold where it internally recursively improves itself and then plan to take over the world in one go. 5/
That’s a valid concern, because Sam really was worried about his sister dying out from AI in the next 1-3 years. We should be deeply concerned about corporate-AI scaling putting the sixth mass extinction into overdrive. But not in the way Yudkowsky speculates about it. 6/
Stop AI also had a “fuck-transhumanism” channel at some point. We really don’t like the grand utopian ideologies of people who think they can take over society with ‘aligned’ technology. I’ve been clear on my stance on Yudkowsky, and so have others. 7/
Transhumanist takeover ideology is convenient for wannabe system dictators like Elon Musk and Sam Altman. The way to look at this: They want to make people expendable. 8/
[...]
Thanks a lot for engaging!
One general point: My rough guess is that acceptance rates have stayed largely constant across AI safety programs over the last ~2 years because capacity has scaled with interest. For example, Pivotal grew from 15 spots in 2024 to 38 in 2025. While the ‘tail’ likely became more exceptional, my sense is that the bar for the marginal admitted fellow has stayed roughly the same.
They might (as I am) be making as many applications as they have energy for, such that the relevant counterfactual is another application, rather than free time.
The model does assume that most applicants aren’t spending 100% of their time/energy on applications. However, even if they were, I feel like a lot of this is captured by how much they value their time. I think that the counterfactual of how they spend their time during the fellowship period (which is >100x more hours than the application process) is the much more important variable to get right.
you also need to consider the intangible value of the counterfactual
This is correct. I assumed most people would take this into account (e.g. subtract their current job’s networking value from the fellowship’s value), but I might add a note to make this explicit.
you also ought to consider the information value of applying for whatever else you might have spent the time on
I’m less worried about this one. Since we set the fixed Value of Information quite conservatively already, and most people aren’t constantly working on applications, I suspect this is usually small enough to be noise in the final calculation.
there is a psychological cost to firing out many low-chance applications
I agree this is real, but I think it’s covered in the Value of Your Time. If you earn £50/hr but find applying on the weekend fun/interesting, you might set the Value of Your Time at £5/hr. If you are unemployed but find applying extremely aversive, you might price your time at e.g., £200/hr.
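The model discussed above can be sketched roughly as follows. This is a minimal illustration of the general shape of the calculation, not the calculator’s actual implementation; all parameter names and numbers are assumptions chosen for the example.

```python
# Rough sketch of the expected-value-of-applying model discussed above.
# Assumed structure: EV = p(accept) * net fellowship value
#                       + value of information - cost of applying.

def ev_of_applying(p_accept, fellowship_value, value_of_info,
                   hours_to_apply, value_of_time_per_hour):
    """Expected value of submitting one application.

    fellowship_value should already be net of the counterfactual
    (e.g. subtract your current job's networking value, as noted above).
    value_of_time_per_hour can be adjusted up or down to price in how
    aversive or enjoyable you find applying.
    """
    application_cost = hours_to_apply * value_of_time_per_hour
    return p_accept * fellowship_value + value_of_info - application_cost

# Illustrative numbers: 3.5% acceptance rate, fellowship worth £20,000
# net of counterfactual, a conservative £100 value of information,
# and 10 hours of applying priced at £50/hour.
ev = ev_of_applying(0.035, 20_000, 100, 10, 50)
print(round(ev))  # 0.035 * 20,000 + 100 - 500 = 300
```

Even at a 3.5% acceptance rate, the application is positive-EV in this toy example because the fellowship value dwarfs the application cost; the sign flips quickly if the time cost rises or the net fellowship value shrinks.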
Should I Apply to a 3.5% Acceptance-Rate Fellowship? A Simple EV Calculator
- Tobias Häberli · 20 Nov 2025 12:36 UTC · 4 points · 0 ∶ 0 · in reply to: DavidNash’s comment on: Open Philanthropy Is Now Coefficient Giving
Expecting “cogi ergo multiply” merch now...
9+ weeks of mentored AI safety research in London – Pivotal Research Fellowship
Thanks, that’s useful. I mostly agree with you, and mistakenly read the second bullet point as saying “work that opposes fascism should come from all sides of the political spectrum”, which is something I agree with. I think the OP somewhat assumed that opposing fascism will look like ‘work with your local anti-fascist network’, but I expect much of it could look more like ‘militarising Europe’ (something the political left would typically oppose).