Agree with this. I just want to be super clear that I think entrepreneurs should optimise for something like cost-effectiveness x scale.
I think research & advocacy orgs can often be 10x more cost-effective than big physical projects, so a $10m research org might be as impactful as a $100m physical org, which means they’re sometimes going to be the right call.
But I think the EA mindset probably focuses a bit too much on cost-effectiveness rather than scale (since we approach it from the marginal donor perspective rather than the entrepreneur one). If we’re also leadership constrained, we might prefer a smaller number of bigger projects, and the bigger projects often have bigger externalities.
Overall, I agree we should be considering big physical projects, and agree these probably require different skills.
The reason most EA founders (and aspiring founders) act as if money is scarce is that, in the lived experience of most EA founders, money is hard to get. As far as I know, this is true in all cause areas, including longtermism.
Yes—part of the reason the funding overhang dynamic is happening in the first place is that it’s really hard to think of a project with a clearly net positive return from a longtermist perspective, and even harder to put one into practice.
Yeah, in the same thread Ben tweets:
But the EA Infrastructure Fund currently only has ~$65k available
If there is plenty of funding, is it just in the wrong place? Given Ben’s latest post, should we be encouraging donations to the EA Infrastructure Fund (and Long-Term Future Fund) rather than the Global Health and Development Fund, which currently has over $7m available?
Hi, thanks for mentioning this—I am the chairperson of the EA Infrastructure Fund and wanted to quickly comment on this: We do have room for more funding, but the $65k number is too low. As of one week ago, the EAIF had at least $290k available. (The website for me now does show $270k, not $65k.)
It is currently hard to get accurate numbers, including for ourselves at EA Funds, due to an accounting change at CEA. Apologies for any confusion this might cause. We will fix the number on the website as soon as possible, and will also soon provide more reliable info on our room for more funding in an EA Forum post or comment.
ETA: according to a new internal estimate, as of August 10th the EAIF had $444k available.
I have edited all our fund pages to include the following sentence:
I’d be happy to see more going to meta at the margin, though I’d want to caution against inferring much from how much the EA Infrastructure Fund has available right now.
The key question is something like “can they identify above-the-bar projects that are not getting funded otherwise?”
I believe the Infrastructure team has said they could fund a couple of million dollars worth of extra projects, and if so, I hope that gets funded.
Though even that also doesn’t tell us much about the overall situation. Even in a world with a big funding overhang, we should expect there to be some gaps.
Epistemic status: Moderate opinion, held weakly.
I think one thing that people, both in and outside of EA orgs, find confusing is that we don’t have a sense of how high the standards of marginal cost-effectiveness ought to be before it’s worth scaling at all. Related concepts include “Open Phil’s last dollar” and “quality standards.”
In global health I think there’s a clear minimal benchmark (something like “$s given to GiveDirectly at >$10B/year scale”), but I think it’s not clear whether people should bother creating scalable charities that are only slightly better in expectation (say 2x) than GiveDirectly, or whether they ought to have a plausible case for competing with marginal Malaria Consortium, AMF, or deworming donations (which, given current disease burdens, the moral value of life vs economic benefits, etc., are I think estimated at ~5-25x(?) the impact of GiveDirectly).
In longtermism I think the situation is murkier. There’s no minimal baseline at all (except maybe GiveDirectly again, which now relies more on moral beliefs than on empirical beliefs about the world), so I think people are just quite confused in general about whether what’s worth scaling looks more like “a 90th-percentile climate change intervention” vs “something with a plausible shot of being the most important AI alignment intervention.”
In animal welfare it’s somewhere in between. I think corporate campaigns a) look like a promising marginal use of money and b) our uncertainty about their impact spans more like 2 orders of magnitude (rather than ~1 for global health and ~infinite for longtermism). But comparing scalable interventions to existing corporate campaigns is premised on there not being lots of $s that’d flood the animal welfare space in the future, which I think is quite an uncertain proposition in practice.
Meta is at least as confused as the object-level charities, because you’re multiplying the uncertainty of the meta work by the uncertainty of how it feeds into the object-level work, so it should be more confused, not less.
Personally, my best guess is that when people are confused about what quality standards to aim at, they default to either a) sputtering around or b) doing the highest-quality things possible, instead of consciously and carefully thinking about what can scale while maintaining (or accepting slightly worse than) current quality. This means we currently implicitly overestimate the value of the last EA dollar.
I’m inside-view pretty convinced last-dollar uncertainty is a really big deal in practice, yet many grantmakers seem to disagree (see e.g. comments here), and I’m not sure where the intuition differences lie.
I agree this is a big issue, and my impression is many grantmakers agree.
In longtermism, I think the relevant benchmark is indeed something like OP’s last dollar in the longtermism worldview bucket. Ideally, you’d also include the investment returns you’ll earn between now and when that’s spent. This is extremely uncertain.
Another benchmark would be something like offsetting CO2, which is most likely positive for existential risk and could be done at a huge scale. Personally, I hope we can find things that are a lot better than this, so I don’t think it’s the most relevant benchmark—more of a lower bound.
In some ways, meta seems more straightforward—the benchmark should be: can you produce more than 1 unit of resources (NPV) per unit that you use?
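That benchmark can be made concrete with a toy NPV calculation. All figures below are made up purely for illustration, including the discount rate:

```python
# Toy check of the meta benchmark: does a unit of resources spent produce
# more than 1 unit of resources in NPV terms? All numbers are hypothetical.

def npv(cashflows, discount_rate):
    """Net present value of a list of yearly cashflows (year 0 first)."""
    return sum(c / (1 + discount_rate) ** t for t, c in enumerate(cashflows))

spent = 1_000_000                            # $ used by the meta org this year
generated = [0, 400_000, 500_000, 600_000]   # $ of extra resources in later years
multiplier = npv(generated, 0.05) / spent    # benchmark passes if > 1
```

On these made-up numbers the multiplier comes out above 1, so the hypothetical org clears the bar; the point of the sketch is just that the benchmark is a discounted ratio, not a raw one.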
Hmm I’d love to see some survey results or a more representative sample. I often have trouble telling whether my opinions are contrarian or boringly mainstream!
I wonder if this is better or worse than buying up fractions of AI companies?
I think I agree, but I’m not confident about this, because this feels maybe too high-level? “1 unit” seems much more heterogeneous and less fungible when the resources we’re thinking of are “people” or (worse) “conceptual breakthroughs” (as might be the case for cause prio work), and there are lots of ways that things are in practice pretty hard to compare, including but not limited to sign flips.
I should probably have just said that OP seems very interested in the last dollar problem (and that’s ~60% of grantmaking capacity).
Agree with your comments on meta.
With cause prio research, I’d be trying to think about how much more effectively it lets us spend the portfolio, e.g. a 1% improvement to $420 million per year is worth about $4.2m per year.
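As a sanity check, the arithmetic in that example is just:

```python
# The comment's arithmetic: a 1% improvement in how effectively a $420m/year
# portfolio is spent is worth about $4.2m/year.
portfolio_per_year = 420_000_000   # $/year of grantmaking affected
improvement = 0.01                 # 1% better allocation
value_per_year = portfolio_per_year * improvement  # ~$4.2m/year
```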
So just total impact?
Yes, basically—if you’re starting a new project, then all else equal, go for the one with highest potential total impact.
Instead, people often focus on setting up the most cost-effective project, which is a pretty different thing.
This isn’t a complete model by any means, though :) Agree with what Lukas is saying below.
With a bunch of unrealistic assumptions (like constant cost-effectiveness), the counterfactual impact should be (impact/resource - opportunity cost/resource) * resource.
If impact/resource is much bigger than opportunity cost/resource (so that the latter is negligible), this is roughly equal to impact/resource * resource, which is one reading of cost-effectiveness * scale.
If so, assuming that resource = $ in this case, this roughly translates to the heuristic “if the opportunity cost of money isn’t that high (compared to your project), you should optimise for total impact without thinking much about the monetary costs”.
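A minimal sketch of that identity, with made-up numbers, showing how the formula collapses to cost-effectiveness * scale when the opportunity cost term is negligible:

```python
# Counterfactual impact = (impact/resource - opportunity cost/resource) * resources.
def counterfactual_impact(impact_per_resource, opp_cost_per_resource, resources):
    return (impact_per_resource - opp_cost_per_resource) * resources

# Hypothetical project: cost-effectiveness of 10 impact units per $, opportunity
# cost of a marginal $ of only 0.1 units, and $1m of spending.
exact = counterfactual_impact(10.0, 0.1, 1_000_000)   # ~9.9m units
approx = 10.0 * 1_000_000                              # cost-effectiveness * scale
```

Here the approximation is off by only 1%, but if the opportunity cost were comparable to the project’s cost-effectiveness, dropping it would badly overstate the counterfactual impact.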
Good point.
We could also read “impact/resource - opportunity cost/resource” as a cost-effectiveness estimate that takes opportunity costs into account. I think Charity Entrepreneurship has been optimizing for this (at least sometimes, based on the work I’ve seen in the animal space) and refers to it as a cost-effectiveness estimate, but I think this is not typical in EA.
Also, this is looking more like cost-benefit analysis than cost-effectiveness analysis.