I’m sad to announce that I’m leaving academia.
I’m looking forward to working on AI safety.
Good point, they might know. Does anyone know a neurotypical? Or a friend of a neurotypical that I could reach out to?
Attacking people’s outfit choices is a unique low for the EA forum!
Thanks for writing this!
You’re describing integral altruism as broader than EA, but if I understand you correctly, it’s also narrower in many ways. Some examples:
Letting go of the need to control everything and transcending the frame that we are in conflict with the natural unfolding of the universe. This also means emphasising collective action over individual heroism.
–> Effective altruism doesn’t take a position on whether we are in conflict with the natural unfolding of the universe. EAs emphasise collective action over individual heroism to varying degrees.
take radical uncertainty seriously
–> EAs already do this to various degrees. If integral altruists take this really seriously, they are a subset of EAs in this regard.
altruism grounded in truth rather than being driven by guilt or pride
–> EA doesn’t say what your altruistic motivation should be grounded in. All of the reasons you list are considered viable (although people of course disagree about the degree to which they are conducive / to be encouraged).
Some of the things you describe (especially the ‘different ways of knowing’) sit further outside what is common within EA. In those respects, integral altruism does seem broader.
Overall I’m not completely sure whether integral altruism is a way of doing effective altruism differently, or a competing (though often overlapping) world view.
Good points, thank you!
They have incredibly short AGI timelines, so per their own views, they can’t afford to move slowly. If they are giving less than 5% of assets after they already claim AGI, that’s a huge failure.
Do we know whether this is true for the OAF board?[1] Sam Altman is on it, and he definitely believes something along these lines, but it’s less clear for the others. Here’s a ChatGPT and a Claude answer on this, which point towards the others being less bullish & concerned (but also towards a lack of information about what they believe). I expect there to be a range of views on timelines & transformativeness of AGI among the board members – which probably makes it more likely that their spending targets are compatible with the foundation’s mission.
Bret Taylor (Chair), Adam D’Angelo, Dr. Sue Desmond-Hellmann, Dr. Zico Kolter, Retired U.S. Army General Paul M. Nakasone, Adebayo Ogunlesi, Nicole Seligman, Sam Altman
It looks much nicer than the original imo. If I didn’t have context, I’d probably be confused though.
Why 80,000 hours? And what is the pie chart / watch face analogy about? At first glance I’m not sure whether it’s about career choice, time management, life balance, or some ‘5pm’ metaphor.
I looked at it in this order: (1) “80,000 hours”, (2) pie chart / watch face, trying to figure it out, (3) subtitle, (4) endorsement. But the subtitle and endorsement are doing most of the work of telling me what the book is actually about and whether it’s for me.
Maybe some of this is intended, to make people pick up the book and try to find answers. :)
I agree it would be bad if the OpenAI Foundation were still giving under 5% per year several years from now. But I don’t think ‘they should spend 5%+ in year one’ follows.
Directing billions well is really hard, especially for a new foundation. Coefficient Giving says it directed over $4 billion from 2014 to mid-2025, and that 2025 was the first year it directed more than $1 billion. Their ‘endowment’ is much smaller (~10x smaller?) than OAF’s, but this still suggests that allocating money well at that scale is genuinely hard. I wouldn’t call a new foundation planning to deploy $1 billion in its first year “conservative”.
What I’d most like to see is OAF committing to aggressive, public ramp-up targets, maybe something like spending 5% of assets per year by 2028.
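For concreteness, here’s a minimal sketch of what such a schedule could imply in dollar terms – the endowment figure and the yearly payout fractions below are purely illustrative assumptions, not OAF’s actual assets or plans:

```python
# Illustrative ramp-up sketch. The endowment figure and the yearly payout
# fractions are made-up assumptions, not OAF's actual assets or plans.
endowment = 25e9  # hypothetical asset value, in dollars

# hypothetical schedule: ramp up to a 5% annual payout by 2028
schedule = {2026: 0.01, 2027: 0.025, 2028: 0.05}

for year, frac in schedule.items():
    print(f"{year}: {frac:.1%} of assets ≈ ${frac * endowment / 1e9:.2f}B")
```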
No, sorry. The diamond emoji (🔸) is specifically for people who donate 10% of their earnings.
But taking a 50% pay cut for altruistic reasons is incredibly based, so you should use the square emoji instead (🟧). It’s also larger, which seems fitting.
Thanks, that’s useful. I mostly agree with you; I had mistakenly read the second bullet point as saying “work that opposes fascism should come from all sides of the political spectrum”, which is something I agree with. I think the OP somewhat assumed that opposing fascism will look like ‘work with your local anti-fascist network’, but I expect much of it could look more like ‘militarising Europe’ (something the political left would typically oppose).
I’m curious to understand better where people disagree with this comment.
I don’t think this quite works as a response to Alene’s point. Many things are necessary/valuable preconditions for doing good. We need food, water, functioning infrastructure, preserving democracy, the internet, etc. The fact that something is a precondition for other work doesn’t by itself make it a high-priority EA cause area.
If I apply the ITN framework to ‘preserving democracy’, I get something like:
Importance: Not losing democracy is very important. But losing it would arguably have been similarly catastrophic ten years ago, too. The real question is how much the probability of losing it has actually increased. Even though that probability seems higher right now, I expect it to still be relatively small – but I’m uncertain.
Neglectedness: Very low. I agree with Alene’s core point that it’s one of the least neglected causes right now.
Tractability: I’d argue somewhat low, though I’m highly uncertain. There’s little reason to believe there’s lots of low-hanging fruit that hasn’t already been picked over decades, even centuries, of interest in making democracies stable.
It’s also worth noting that much of the current concern is specifically about US democracy. The US matters a lot (largest economy, major influence on the rest of the world, where AI is mostly going to be built), and tractability there is plausibly higher right now – but that’s a narrower cause than ‘preserving democracy’ full stop (e.g. reducing global democratic backsliding).
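Stepping back to the ITN framing above: because the three factors combine multiplicatively (as in the usual 80,000 Hours-style scoring), a very low neglectedness score dominates everything else. A toy sketch, where every number is a made-up placeholder rather than a real estimate:

```python
# Toy ITN sketch: the factors multiply, so a very low score on any one
# of them (here, neglectedness) drags the whole cause score down.
# Every number below is a made-up placeholder, not a real estimate.
def itn_score(importance: float, tractability: float, neglectedness: float) -> float:
    return importance * tractability * neglectedness

print(itn_score(importance=8, tractability=3, neglectedness=1))  # e.g. 'preserving democracy': 24
print(itn_score(importance=6, tractability=3, neglectedness=7))  # a more neglected cause: 126
```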
Thanks for this post – really would have liked having such a filter in the past.
We estimate that The Vegan Filter could cut the convenience barrier roughly in half by addressing the “supermarket barrier,” one of the largest friction points for new vegans.
Can you say more about why you estimate this to halve the convenience barrier?
I expect this to be much lower, maybe cutting the inconvenience of being vegan by 1-5%. The filter could still be worth the effort, of course :)
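To show where a number in that range could come from, here’s a back-of-the-envelope sketch; both inputs are my own rough guesses, not measured quantities:

```python
# Back-of-the-envelope Fermi sketch; both inputs are rough personal guesses.
supermarket_share = 0.20  # assumed share of total vegan inconvenience that is supermarket friction
filter_effect_low, filter_effect_high = 0.10, 0.25  # assumed fraction of that friction the filter removes

low = supermarket_share * filter_effect_low
high = supermarket_share * filter_effect_high
print(f"overall inconvenience reduction: {low:.0%}–{high:.0%}")  # ~2%–5%
```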
which I don’t think Veganuary is.
Seems true. Looking at Google Trends, ‘veganuary’ is searched for a lot less than ‘movember’.
And I’d suspect that ‘movember’ isn’t all that well-known either – compare it to ‘black history month’, for example.
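If anyone wants to reproduce the comparison programmatically, a sketch using the unofficial pytrends wrapper might look like this (the term choices and timeframe are mine):

```python
# Sketch of the Google Trends comparison via pytrends (an unofficial
# wrapper); search terms and timeframe are my own choices.
from pytrends.request import TrendReq

pytrends = TrendReq(hl="en-US", tz=0)
terms = ["veganuary", "movember", "black history month"]
pytrends.build_payload(terms, timeframe="today 5-y")
df = pytrends.interest_over_time()
print(df[terms].mean())  # average relative search interest, 0-100 scale
```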
I haven’t done philosophy in a while and might be missing something, but I wanted to highlight what I think is the strongest objection to the view[1] in a way that may be more salient than the framing in section 6. It’s probably a reason why many might prefer a total view.
To be clear, I do think the Saturation View improves on other non-total views I know of, and I appreciate that they flag some of its hard-to-stomach implications. But I still think the post understates how bad the separability issue is. So here are two short points:
Non-separability is really bad.
The core problem is that facts about/experiences of wholly unaffected people can change the value of the affected person’s experiences. If there are already sufficiently many people elsewhere with sufficiently similar experiences, then an additional person having an extremely deep, meaningful, happy life adds near-zero marginal value. That seems very hard to accept.
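In symbols – my own formalisation, not necessarily the authors’ exact model: let $n_k$ be the number of sufficiently similar experiences of type $k$ that exist anywhere, and let population value be

$$V = \sum_k f(n_k), \qquad f \text{ increasing, concave, and bounded.}$$

Then the marginal value of one more type-$k$ experience, $f(n_k + 1) - f(n_k)$, tends to $0$ as $n_k$ grows – and it depends on $n_k$, i.e. on wholly unaffected people elsewhere. That’s the saturation and the separability failure in one formula.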
And for negative experiences the implication is potentially even less intuitive. An additional torturous experience can add almost no marginal disvalue if enough sufficiently similar torture already exists. They discuss this under the “cheap suffering” problem & call it the strongest argument against the view, but I think it’s worth emphasizing just how unintuitive this conclusion is. From the victim’s perspective, the torture is not any less bad because other similar torture already occurred. But the Saturation View says that, from the point of view of population value, their torturous experience would hardly matter at all.
ETA: Relatedly, the view assigns value to our experiences depending on empirically inaccessible facts. Whether sufficiently distant aliens have sufficiently similar experiences is something we probably can’t know, but it would radically change how our actions matter. That seems strange.
I don’t think the ‘tameness’ of the view recovers that much?
My understanding is that the Saturation View does better because violations of separability are localized. Ancient Egyptians or distant aliens only affect the marginal value of new lives if their experiences are sufficiently similar. So in many “normal situations”, the view behaves roughly separably.
But the separability worry still holds with sufficiently large numbers. If enough sufficiently similar unaffected lives exist elsewhere, they can radically change the marginal value of what we do here.
And population ethics is full of large-number objections. The Repugnant Conclusion itself gets its core intuitive force from considering sufficiently enormous populations, and is also not a “normal situation.” So if the Saturation View is partly motivated by avoiding the very bad large-number implications of total views, then its own large-number implications seem fair game too.
the authors agree with this, afaict