Thanks for red teaming – it seems like lots of people are having similar thoughts, so it’s useful to have them all in one place.
First off, I agree with this:
I think there are better uses of your time than earning-to-give. Specifically, you ought to do more entrepreneurial, risky, and hyper-ambitious direct work, while simultaneously considering weirder and more speculative small donations.
I say this in the introduction (and my EA Global talk). The point I’m trying to get across is that earning to give to top EA causes is still perhaps (to use made-up numbers) in the 98th percentile of impactful things you might do; while these things might be, say, 99.5-99.9th percentile. I agree my post might not have made this sufficiently salient. It’s really hard to correct one misperception without accidentally encouraging one in the opposite direction.
The arguments in your post seem to imply that additional funding has near zero value. My prior is that more money means more impact, but at a diminishing rate.
Before going into your specific points, I’ll try to describe an overall model of what happens when more funds come into the community, which will explain why more money means more but diminishing impact.
Very roughly, EA donors try to fund everything above a ‘bar’ of cost-effectiveness (i.e. value per dollar). Most donors (especially large ones) are reasonably committed to giving away a certain portion of their funds unless cost-effectiveness drops very low, which means that the bar is basically set by how impactful they expect the ‘final dollar’ they give away in the future to be. This means that if more money shows up, they reduce the bar in the long run (though capacity constraints may make this take a while). Additional funding is still impactful, but because the bar has been dropped, each dollar generates a little less value than before.
Here’s a bit more detail of a toy model. I’ll focus on the longtermist case since I think it’s harder to see what’s going on there.
Suppose longtermist donors have $10bn. Their aim might be to buy as much existential risk reduction over the coming decades as possible with that $10bn, for instance, to get as much progress as possible on the AI alignment problem.
Donations to things like the AI alignment problem have diminishing returns – the curve is probably roughly logarithmic. Maybe the first $1bn has a cost-effectiveness of 1000:1, meaning it generates 1000 units of value (e.g. utils, x-risk reduction) per $1 invested. The next $10bn returns 100:1, the next $100bn returns 10:1, the next $1,000bn returns 2:1, and additional funding after that isn’t cost-effective. (In reality, it’s a smoothly declining curve.)
If longtermist donors currently have $10bn (say), then they can fund the entire first $1bn tranche and $9bn of the next. This means their current funding bar is 100:1 – so they should aim to take any opportunities above this level.
Now suppose some smaller donors show up with $1m between them. Now in total there is $10.001bn available for longtermist causes. The additional $1m goes into the 100:1 tranche, and so has a cost-effectiveness of 100:1. This is a bit lower than the average cost-effectiveness of the first $10bn (which was 190:1), but is the same as marginal donations by the original donors and still very cost-effective.
Now instead suppose another mega-donor shows up with $10bn, so the donors have $20bn in total. They’re able to spend $1bn at 1000:1, then $10bn at 100:1 and then the remaining $9bn is spent on the 10:1 tranche. The additional $10bn had a cost-effectiveness of 19:1 on average. This is lower than the 190:1 of the first $10bn, but also still worth doing.
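Here’s a minimal sketch of that tranche arithmetic in Python (the tranche sizes and ratios are the made-up illustrative numbers from this toy model, not real estimates):

```python
# Toy tranches from the example above: (size in $bn, value generated per $).
TRANCHES = [(1, 1000), (10, 100), (100, 10), (1000, 2)]

def total_value(funding_bn):
    """Value bought by `funding_bn` billions, filling the best tranches first."""
    value, remaining = 0.0, funding_bn
    for size, ratio in TRANCHES:
        spent = min(size, remaining)
        value += spent * ratio
        remaining -= spent
        if remaining <= 0:
            break
    return value

def average_ce(extra_bn, existing_bn):
    """Average cost-effectiveness of an extra donation on top of existing funds."""
    return (total_value(existing_bn + extra_bn) - total_value(existing_bn)) / extra_bn

print(total_value(10) / 10)    # ~190: average cost-effectiveness of the first $10bn
print(average_ce(0.001, 10))   # ~100: the marginal $1m from small donors
print(average_ce(10, 10))      # ~19:  a second mega-donor's $10bn
```

Both marginal figures match the example: the small donors’ $1m lands at the current 100:1 bar, and a second $10bn averages 19:1.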
How does this play out over time?
Suppose you have $10bn to give, and want to donate it over 10 years.
If we assume hinginess isn’t changing & ignore investment returns, then the simplest model is that you’ll want to donate about $1bn per year for 10 years.
The idea is that if the rate of good opportunities is roughly constant, and you’re trying to hit a particular bar of cost-effectiveness, then you’ll want to spread out your giving. (In reality you’ll give more in years where you find unusually good things, and vice versa.)
Now suppose a group of small donors show up who have $1bn between them. Then the ideal is that the community donates $1.1bn per year for 10 years – which requires dropping their bar (but only a little).
One way this could happen is for the small donors to give $100m per year for 10 years (‘topping up’). Another option is for the small donors to give $1bn in year 1 – then the correct strategy for the megadonor is to only give $100m in year 1 and give $1.1bn per year for the remaining 9 (‘partial funging’).
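Here’s a minimal sketch of the two schedules, just to make explicit that the community deploys the same amount each year either way (toy numbers from the example above):

```python
years = 10
# Work in units of $100m so the arithmetic stays exact.

# 'Topping up': megadonor gives $1bn/year (10 units); small donors add $100m/year (1 unit).
topping_up = [10 + 1 for _ in range(years)]

# 'Partial funging': small donors give all $1bn (10 units) in year 1, so the megadonor
# gives only $100m (1 unit) that year, then $1.1bn/year (11 units) for the remaining 9.
partial_funging = [10 + 1] + [11 + 0 for _ in range(years - 1)]

print(topping_up == partial_funging)   # True - $1.1bn deployed per year either way
print(sum(topping_up) / 10)            # 11.0 - $11bn in total over the decade
```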
A big complication is that the set of opportunities isn’t fixed – we can discover new opportunities through research or create them via entrepreneurship. (This is what I mean by ‘grantmaking capacity and research’.)
It takes a long time to scale up a foundation, and longtermism as a whole is still tiny. This means there’s a lot of scope to find or create better opportunities. So donors will probably want to give less at the start of the ten years, and more towards the end when these opportunities have been found (and earning investment returns in the meantime).
Now I can use this model to respond to some of your specific points:
At face value, CEPI seems great. But at the meta-level, I still have to ask, if CEPI is a good use of funds, why doesn’t OpenPhil just fund it?
Open Phil doesn’t fund it because they think they can find opportunities that are 10-100x more cost-effective in the coming years.
This doesn’t, however, mean donating to CEPI has no value. I think CEPI could make a meaningful contribution to biosecurity (and given my personal cause selection, likely similarly or more effective than donating to GiveWell-recommended charities).
An opportunity can be below Open Phil’s current funding bar if Open Phil expects to find even better opportunities in the future (as more opportunities come along each year, and as they scale up their grantmaking capacity), but that doesn’t mean it wouldn’t be ‘worth funding’ if we had even more money.
My point isn’t that people should donate to CEPI, and I haven’t thoroughly investigated it myself. It’s just meant as an illustration of how there are many more opportunities at lower levels of cost-effectiveness. I actually think both small donors and Open Phil can have an impact greater than funding CEPI right now.
(Of course, Open Phil could be wrong. Maybe they won’t discover better opportunities, or EA funding will grow faster than they expect, and their bar today should be lower. In this case, it will have been a mistake not to donate to CEPI now.)
In general, my default view for any EA cause is always going to be:
If this isn’t funded by OpenPhil, why should I think it’s a good idea?
If this is funded by OpenPhil, why should I contribute more money?
It’s true that it’s not easy to beat Open Phil in terms of effectiveness, but this line of reasoning seems to imply that Open Phil is able to drive cost-effectiveness to negligible levels in all causes of interest. Actually Open Phil is able to fund everything above a certain bar, and additional small donations have a cost-effectiveness similar to that bar.
In the extreme version of this view, a donation to AMF doesn’t really buy more bednets, it’s essentially a donation to GiveWell, or even a donation to Dustin Moskovitz.
You’re right that donations to AMF probably don’t buy more bednets, since AMF is no longer the marginal opportunity (I think – I’m not sure about that). Rather, additional donations to global health get added to the margin of GiveWell donations over the long term, which Open Phil and GiveWell estimate has a cost-effectiveness of about 7x GiveDirectly – roughly saving the life of a child under 5 for $4,500.
You’re also right that as additional funding comes in, the bar goes down, and that might induce some donors to stop giving altogether (e.g. maybe people are willing to donate above a certain level of cost-effectiveness, but not below it).
However, I think we’re a long way from that point. I expect Dustin Moskovitz would still donate almost all his money at GiveDirectly-levels of cost-effectiveness, and even just within global health, we’re able to hit levels at least 5x greater than that right now.
Raising everyone in the world above the extreme poverty line would cost perhaps $100bn per year (footnote 8 here), so we’re a long way from filling everything at a GiveDirectly level of cost-effectiveness – we’d need about 50x as much capital as now to do that, and that’s ignoring other cause areas.
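One way to back out that ‘~50x’ figure (a rough sketch; the ~5% sustainable drawdown is an assumption consistent with the $2.5bn-per-year figure mentioned below, and the ~$46bn capital figure is the one used later in this thread):

```python
poverty_gap_per_year = 100e9      # $/yr to raise everyone above the extreme poverty line
sustainable_drawdown = 0.05       # assumed sustainable spending rate on an endowment
current_ea_capital = 46e9         # $ - rough figure for current EA-aligned capital

required_capital = poverty_gap_per_year / sustainable_drawdown
print(required_capital / 1e12)                # 2.0 - about $2 trillion of capital needed
print(required_capital / current_ea_capital)  # ~43 - i.e. on the order of 50x current capital
```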
There seem to be a few reasonable views:
1. OpenPhil will fund the most impactful things up to $Y/year.
2. OpenPhil will fund anything with an expected cost-effectiveness of above X QALYs/$.
3. OpenPhil tries to fund every highly impactful cause it has the time to evaluate.
I think view (2) is closest, but this part is incorrect:
What about the second view? In that case, you’re not freeing up any money since OpenPhil just stops donating once it’s filled the available capacity.
What actually happens is that as more funding comes in, Open Phil (& other donors) slightly reduces its bar, so that the total donated is higher, and cost-effectiveness a little lower. (Which might take several years.)
Why doesn’t Open Phil drop its bar already, especially given that they’re only spending ~1% of available capital per year? Ideally they’d be spending perhaps more like 5% of available capital per year. The reason it isn’t higher already is that growth in grantmaking capacity, research and the community will make it possible to find even more effective opportunities in the future. I expect Open Phil will scale up its grantmaking severalfold over the coming decade. It looks like this is already happening within neartermism.
One way to steelman your critique, would be to push on talent vs. funding constraints. Labour and capital are complementary, but it’s plausible the community has more capital relative to labour than would be ideal, making additional capital less valuable. If the ratio became sufficiently extreme, additional capital would start to have relatively little value. However, I think we could actually deploy billions more without any additional people and still achieve reasonable cost-effectiveness. It’s just that I think that if we had more labour (especially the types of labour that are most complementary with funding), the cost-effectiveness would be even higher.
Finally, on practical recommendations, I agree with you that small donors have the potential to make donations even more effective than Open Phil’s current funding bar by pursuing strategies similar to those you suggest (that’s what my section 3 covers – though I don’t agree that grants with PR issues are a key category). But simply joining Open Phil in funding important issues like AI safety and global health still does a lot of good.
In short, world GDP is $80 trillion. The interest on EA funds is perhaps $2.5bn per year, so that’s the sustainable amount of EA spending per year. This is about 0.003% of GDP. It would be surprising if that were enough to do all the effective things to help others.
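A quick check of that percentage (just restating the numbers in the paragraph above):

```python
world_gdp = 80e12             # $
ea_sustainable_spend = 2.5e9  # $/yr - rough interest on EA funds, per the text
print(f"{ea_sustainable_spend / world_gdp:.4%}")   # 0.0031% - about 0.003% of GDP
```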
One way to steelman your critique, would be to push on talent vs. funding constraints. Labour and capital are complementary, but it’s plausible the community has more capital relative to labour than would be ideal, making additional capital less valuable
I’m not sure about this, but I currently believe that the human capital in EA is worth considerably more than the financial capital.
It’s hard to know – most valuations of the human capital are bound up with the available financial capital. One way to frame the question is to consider how much the community could earn if everyone tried to earn to give. I agree it’s plausible that would be higher than the current income on the capital, but I think it could also be a lot less.
It’s hard to know – most valuations of the human capital are bound up with the available financial capital.
Agreed. Though I think I believe this much less now than I used to. To be more specific, I used to believe that the primary reason direct work is valuable is that we have a lot of money to donate, so cause or intervention prioritization is incredibly valuable because of the leveraged gains. But I no longer think that’s the but-for factor, and as a related update I think there are many options roughly as compelling as prioritization work.
One way to frame the question is to consider how much the community could earn if everyone tried to earn to give
I like and agree with this operationalization. Though I’d maybe say “if everybody tried to earn to give or fundraise” instead.
I agree it’s plausible that would be higher than the current income on the capital, but I think it could also be a lot less.
I agree it could also be a lot less, but I feel like that’s the more surprising outcome? Some loose thoughts in this direction:
Are we even trying? Most of our best and brightest aren’t trying to make lots of money. Like I’d be surprised if among the 500 EAs most capable of making lots of money, even 30% are trying to make lots of money.
And honestly it feels like less, more like 15-20%?
Maybe you think SBF is unusually good at making money, more than the remaining 400-425 or so EAs combined?
This at least seems a little plausible to me, but not overwhelmingly so.
I feel even more strongly about this for fundraising. We have HNW fundraisers, but people are very much not going full steam on this
Of the 500 EAs with the strongest absolute advantage for fundraising from non-EA donors, I doubt even 25 of them are working full-time on this.
Retrodiction issues. Believing that we had more capital than human capital at any point in EA’s past would have been a mistake, and I don’t see why now is different.
We had considerably less than ~$50B at the time of your post a few years ago, and most of the gains appear to be in revenue, not capital appreciation.
(H/T AGB) Age curves and wealth. If the income and wealth over time of EAs looks anything like that of normal people (including normal 1%-ers), the highest earnings would come at ages >40 and the highest wealth at ages >60. Our movement’s members have a median age of 27 and a mean age of 30. We are still gaining new members, and most of our new recruits are younger than our current median. So why think we’re past the midpoint of lifetime earnings or donations?
Maybe you think crypto + early FB is a once-in-a-lifetime thing, and that is strong enough to explain the lifetime wealth effect?
I don’t believe that. I think of crypto as a once-in-a-decade thing.
Maybe your AI timelines are short enough that once-in-a-decade is effectively once-in-a-lifetime for you?
If so, I find this at least plausible, but I think this conjunction is a pretty unusual belief, whether in EA or the world at large, so it needs a bit more justification.
I’m not sure I even buy that SBF specifically is past >50% of his earning potential, and would tentatively bet against.
Macabre thought experiment: if an evil genie forced you to choose between a) all EAs except one (say a good grantmaker like Holden or Nick Beckstead) dying painlessly, with their inheritance sent to that grantmaker, and b) all of our wealth magically evaporating, which would you choose?
For me it’d be b), and not even close.
Another factor is that ~half of our wealth is directly tied to specific people in EA. If SBF + cofounders disappeared, FTX’s valuation would plummet.
But I don’t think that’s the relevant comparison for the ‘ETG versus direct work’ question. If we have a lot of human capital, that also means we could earn and give more through ETG.
The more relevant comparison is something like:
is the typical EA’s human capital more valuable in direct EA work than it would be in non-EA work? If not, they should ETG … and their donations could hire (non-EAs?) to fill the talent gap
If the financial capital is $46B and the population is 10k, the average person’s career capital is worth about $5M of direct impact (as opposed to the money they’ll donate)? I have a wide confidence interval, but that seems reasonable. I’m curious to see how many people currently going into EA jobs will still be working on them 30 years later.
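For what it’s worth, that per-person figure is just the division (a trivial sketch with the numbers as stated):

```python
financial_capital = 46e9   # $
population = 10_000        # people
print(financial_capital / population)   # 4,600,000 - i.e. roughly $5M per person
```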
I want to ‘second’ some key points you made (which I was going to make myself). The main theme is that these ‘absolute’ thresholds are not absolute; these are simplified expressions of the true optimization problem.
The real thresholds will be adjusted in light of available funding, opportunities, and beliefs about future funding.
See comments (mine and others) on the misconception of ‘room for more funding’… the “RFMF” idea must be either an approximate relative judgment (‘past this funding, we think other opportunities may be better’) or a short-term capacity constraint (‘we only have staff/permits/supplies to administer 100k vaccines per year, so we’d need to do more hiring and sourcing to go above this’).
Diminishing returns … but not to zero
The arguments in your post seem to imply that additional funding has near zero value. My prior is that more money means more impact, but at a diminishing rate.
It’s true that it’s not easy to beat Open Phil in terms of effectiveness, but this line of reasoning seems to imply that Open Phil is able to drive cost-effectiveness to negligible levels in all causes of interest. Actually Open Phil is able to fund everything above a certain bar, and additional small donations have a cost-effectiveness similar to that bar.
The bar moves
What actually happens is that as more funding comes in, Open Phil (& other donors) slightly reduces its bar, so that the total donated is higher, and cost-effectiveness a little lower. (Which might take several years.)
> At face value, [an EA organization] seems great. But at the meta-level, I still have to ask, if [organization] is a good use of funds, why doesn’t OpenPhil just fund it?
Open Phil doesn’t fund it because they think they can find opportunities that are 10-100x more cost-effective in the coming years.
This is highly implausible. First of all, if it’s true, it implies that instead of funding things, they should just do fundraising and sit around on their piles of cash until they can discover these opportunities.
But it also implies they have (in my opinion, excessively) high confidence that the hinge-of-history and astronomical-waste arguments are all wrong, and that transformative AI is farther away than most forecasters believe. If someone is going to invent AGI in 2060, we’re really limited in the amount of time available to alter the probabilities that it goes well vs badly for humanity.
When you’re working on global poverty, perhaps you’d want to hold off on donations if your investments are growing by 7% per year while GDP of the poorest countries is only growing by 2%, because you could have something like 5% more impact by giving 107 bednets next year instead of 100 bednets today.
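To spell out that arithmetic (a rough sketch; the growth rates are the ones assumed in the sentence above):

```python
investment_growth = 1.07        # your investments grow 7% over the year
recipient_income_growth = 1.02  # the poorest countries' incomes grow 2%

# Waiting a year buys 7% more bednets, but each one is worth ~2% less to
# now-richer recipients, so the net gain is roughly:
net_gain = investment_growth / recipient_income_growth - 1
print(f"{net_gain:.1%}")        # 4.9% - i.e. "something like 5% more impact"
```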
For x-risks this seems totally implausible. What’s the justification for waiting? AGI alignment does not become 10x more tractable over the span of a few years. Private sector AI R&D has been growing by 27% per year since 2015, and I really don’t think alignment progress has outpaced that. If time until AGI is limited and short then we’re actively falling behind. I don’t think their investments or effectiveness are increasing fast enough for this explanation to make sense.
I think the party line is that the well-vetted (and good) places in AI Safety aren’t funding-constrained, and the non-well-vetted places in AI Safety might do more harm than good, so we’re waiting for places to build enough capacity to absorb more funding.
Under that worldview, I feel much more bullish about funding constraints for longtermist work outside of AI Safety, as well as more meta work that can feed into AI Safety later.
Within AI Safety, if we want to give lots of money quickly, I’d think about:
1. Funding individuals who seem promising and are somewhat funding constrained.
   - e.g. very smart students in developing countries, or Europe, who want to go into AI Safety
   - also maybe promising American undergrads from poorer backgrounds
   - The special case here is yourself, if you want to go into AI Safety and want to invest $s in your own career capital.
2. Figuring out which academic labs differentially improve safety over capabilities, and throwing GPUs, research engineers, or teaching-time buyouts at their grad students.
   - When I talked to an AI safety grad student about this, he said that Top 4 CS programs are not funding constrained, but top 10-20 are somewhat.
   - We’re mostly bottlenecked on strategic clarity here; different AI Safety people I talk to have pretty different ideas about which research differentially advances safety over capabilities.
3. Possibly just throwing lots of money at “aligned enough” academic places like CHAI, or individual AI-safety-focused professors.
   - Unlike the above, the focus here is more on alignment than on a strategic understanding that what people are doing is good – just hoping that apparent alignment + trusting other EAs is “good enough” to be net positive.
4. Seriously considering buying out AI companies, or other bottlenecks to AI progress.
Other than #1 (which grantmakers are somewhat bottlenecked on due to their lack of local knowledge/networks), none of these things seem like “clear wins” in the sense of shovel-ready projects that can absorb lots of money and that we’re pretty confident are good.
When I talked to an AI safety grad student about this, he said that Top 4 CS programs are not funding constrained, but top 10-20 are somewhat.
I’ve never been a grad student, but I suspect that CS grad students are constrained in ways that EA donors could fairly easily fix. They might not be grant-funding-constrained, but they’re probably make-enough-to-feel-financially-secure-constrained or grantwriting-time-constrained, and you could convert AI grad students into AI safety grad students by lifting these constraints for them.
This has good content, but I am genuinely confused (partly because this article’s subject is complex and this is after several successive replies). Your point about timelines seems limited to AI risk; I don’t see the connection to the point about CEPI. Maybe biorisk has similar “fast timelines” to AI risk – is this what you mean? I hesitate to assume that’s your meaning, so I’m writing this comment instead. I really just want to understand this thread better.

Sorry, I didn’t mean to imply that biorisk does or doesn’t have “fast timelines” in the same sense as some AI forecasts. I was responding to the point about “if [EA organization] is a good use of funds, why doesn’t OpenPhil fund it?” being answered with the proposition that OpenPhil is not funding much stuff in the present (disbursing 1% of their assets per year, a really small rate even if you are highly patient) because they think they will find better things to fund in the future. That seems like a wrong explanation.
The point I’m trying to get across is that earning to give to top EA causes is still perhaps (to use made-up numbers) in the 98th percentile of impactful things you might do; while these things might be, say, 99.5-99.9th percentile.
I think this is a very useful way of putting it. I would be interested in anyone trying to actually quantify this (even to just get the right order of magnitude from the top). I suspect you have already done something in this direction when you decide what jobs to list on your job board.