Marginally More Effective Altruism

There’s a huge amount of energy spent on how to get the most QALYs/$. And a good amount of energy spent on how to increase total $. And you might think that across those efforts, we are succeeding in maximizing total QALYs.

I think a third avenue is underinvestigated: marginally improving the effectiveness of ineffective capital. That is to say, improving outcomes, if only somewhat, for the pool of money that is not at all EA-aligned.

This cash is not being spent optimally, and likely never will be. But the sheer volume could make up for the lack of efficacy.

Say you have the option to work for the foundation of one of two donors:

  • Donor A only has an annual giving budget of $100,000, but will do with that money whatever you suggest. If you say “bed nets” he says “how many”.

  • Donor B has a much larger budget of $100,000,000, but has much stronger views, and in fact only wants to donate domestically, within the US.

Donor B feels overlooked to me, despite the fact that even within the US, even without access to any of the truly most effective charities, there are still lots of opportunities to do marginally better.

In practice, I note a conspicuous lack of EAs working for Donor B-like characters. There does not seem to be any kind of concerted effort to influence ineffective foundations.[1]

Most money is not EA money

I’ve often heard of the Seeing Eye Dog argument for the overwhelming importance of EA:

One seeing eye dog charity claims it costs ‘$42,000 or more’ to train the dog and provide instruction to the user. A cataract charity claims to be able to perform the procedure for $25.

Less anecdotally, an 80k report highlights the power law in the effectiveness of global health interventions.

The power law of impact is a really strong argument for prioritizing QALYs/$, even at the cost of overall dollars. If the best interventions are literally 100x or even 1,000x as impactful as the median ones, that is going to be tough to make up for.

On the other hand, there is a really steep power law on the dollars side too! Some people have way more money than others. And most money in the world is not EA money.

Take the Azim Premji Foundation. It has a massive endowment of $21 billion, yet there are no mentions of it on the EA Forum.

APF was founded with an explicit focus on education in India, so their grantmaking is pretty restricted, and it might feel like getting them to have anything to do with EA would be a huge long shot. But there is a lot of room between “status quo” and “fully optimized”, and this space feels neglected (by EAs), tractable, and highly scalable.

Modified from Giving What We Can

As a simplified model, let’s take these numbers literally, and assume that the APF currently operates at a mere 1x. Their geographic and cause restrictions might mean that they’ll never get to 100x, but 10x could be entirely plausible. If the status quo is that $20b of APF giving is like $0.2b of GiveWell giving, there’s an opportunity to 10x that impact, and generate a net $1.8b in GiveWell-equivalent impact.

Those units aren’t entirely intuitive and the numbers are largely made up, but they get across the point that moving huge sums of money from Charity C to Charity B matters a huge amount, even if you never get to Charity A.
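The simplified model above amounts to a few lines of arithmetic. Here is a sketch of it; all figures are the illustrative assumptions from the text, not real APF or GiveWell data:

```python
# Back-of-the-envelope model: impact measured in "GiveWell-equivalent dollars".
# Every number here is a hypothetical assumption from the post, not real data.
apf_giving = 20e9          # assumed total APF giving, $20b
baseline_multiplier = 1    # assumed status-quo effectiveness
improved_multiplier = 10   # plausible effectiveness after marginal reform
givewell_multiplier = 100  # assumed effectiveness of top GiveWell charities

baseline_impact = apf_giving * baseline_multiplier / givewell_multiplier
improved_impact = apf_giving * improved_multiplier / givewell_multiplier

print(f"Status quo: ${baseline_impact / 1e9:.1f}b GiveWell-equivalent")  # $0.2b
print(f"After 10x:  ${improved_impact / 1e9:.1f}b GiveWell-equivalent")  # $2.0b
print(f"Net gain:   ${(improved_impact - baseline_impact) / 1e9:.1f}b")  # $1.8b
```

The point of the toy calculation is just that the net gain scales with the total pool of money, so a modest multiplier on a huge pool can rival a large multiplier on a small one.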

Going further, there could be room to negotiate or expand the charity’s entire charter. It would perhaps not be impossible to argue that Vitamin A supplementation should count as an education intervention since it can prevent blindness. And in fact, APF has already begun to branch out into health, and specifically on the nutritional status of young children. Nudging them in this direction even a year earlier, or doing the work they now want to do in a marginally more effective way (even without say, redirecting the funds to Niger), could be hugely impactful.

How much money is there?

GiveWell already does hundreds of millions per year, and in some sense that’s a lot, but it pales in comparison to the total pool of capital that feels, in principle, “available”.

As of 2025, there were roughly 3,000 billionaires representing a total net worth of around $16T. That is a lot of money! They might not give it all to charity, but they will have to do something with it, and we should work hard to make sure that something is as good as it reasonably can be.

There are already lots of very motivated EAs trying to direct Dustin Moskovitz’s relatively modest $12b. There seem to be far fewer trying hard to direct even 1% of the budgets of the other billionaires. There is only a single mention of Amancio Ortega on the EA Forum, even though he is the 9th richest person with a net worth of $124b. And there is barely any mention of Bernard Arnault, Larry Ellison, or Steve Ballmer.

These four alone represent over $600b in wealth that could at least be spent on marginally better causes. And in fact, they’ve already collectively spent billions on philanthropic causes. Marginally improving even a small portion of these donations could be huge.

Effective Everything?

EA-relevant organizations do around $1b/year in giving, but the world collectively does around $1t. There’s a tremendous neglected opportunity to improve the effectiveness of the world’s charities, even without making it all the way to truly optimal causes.

It feels, in some sense, against EA principles, but there should be lists of “the most effective charities, given that you can only give to X country” or “given that you’re only interested in X cause”. Beyond analysis, there is a huge amount of room for direct engagement, and for trying to work at some of the world’s largest non-EA foundations.

DAFgiving360™ does not sound like an EA nonprofit. It works with Charles Schwab & Co., which is not a very EA organization. And yet it has done $44 billion in grant recommendations, will do a lot more in the future, and is hiring a senior manager for charitable consulting. This kind of job does not currently end up on the 80,000 Hours Job Board, but I believe it really ought to. And we ought to think harder about “marginal reform of legacy non-EA institutions” as an important skill set.

  1. ^

    An obvious explanation for the lack of visibility would be that these people don’t want to identify as EAs, because it would alienate “normie” donors. This is possible, but I’m still suspicious that I’ve literally never heard of anyone in EA taking this path to impact.