We could spend all longtermist EA money, now.
(This is a sort-of sequel to my previous shortform.)
The EA funding pool is large, but not infinite. This statement is nothing to write home about, but I’ve noticed quite a few EAs I talk to view longtermist/x-risk EA funding as effectively infinite, the notion being that we’re severely bottlenecked by good funding opportunities.
I think this might be erroneous.
Here are some areas that could plausibly absorb all EA funding, right now:
Biorisk
Better sequencing
Better surveillance
Developing and deploying PPE
Large-scale philanthropic response to a pandemic
AI risk
Policy spending (especially in the US)
AI chips
either scaling up chip production, or buying up top-of-the-range chips
Backing the lab(s) that we might want to get to TAI/AGI/HLMI/PASTA first
(Note: I’m definitely not saying we should fund these things, but I am pointing out that there are large funding opportunities out there which potentially meet the funding bar. For what it’s worth, my true thinking is something closer to: “We should reserve most of our funding for shaping TAI come crunch time, and/or once we have better strategic clarity.”
Note also: Perhaps some, or all, of these don’t actually work, and perhaps there are many more examples I’m missing—I only spent ~3 mins brainstorming the above. I’m also pretty sure this wasn’t a totally original brainstorm, and that I was remembering these examples having read something on a similar topic somewhere, probably here on the Forum, though I can’t recall which post it was.)
Hmm, it feels unclear to me what you’re claiming here. In particular, I’m not sure which of the following is your claim:
1. “Right now all money committed to EA could be spent on things that we currently (should) think are at least slightly net positive in expectation. (Even if we maybe shouldn’t spend on those things yet, since maybe we should wait for even better opportunities.)”
2. “Right now all money committed to EA could be spent on things that might be net positive in expectation. (But there aren’t enough identified opportunities that we currently think are net positive to absorb all current EA money. Some of the things currently look net negative but with high uncertainty, and we need to do further research or wait till things naturally become closer and clearer to find out which are net positive. We also need to find more opportunities.)”
1 is a stronger and more interesting claim than 2, but you don’t make it clear which one you’re asserting.
If 2 is true, then we are still “severely bottlenecked by good funding opportunities”, and by strategic clarity. So it might be that the people you’re talking to already believe 2, rather than that EA funding is effectively infinite?
To be clear, I do think 2 is importantly different from “we have effectively infinite money”, in particular in that it pushes in favor of not spending now on opportunities that are only very slightly net positive, since we want to save money for when we’ve learned more about which of the known maybe-good huge funding opportunities are actually good.* So if there are people acting and thinking as though we have effectively infinite money, I do think they should get ~this message. But I think your shortform could benefit from distinguishing 1 and 2.
(Also, a nit-picky point: I’d suggest avoiding phrasing like “could plausibly absorb all EA funding” without a word like “productively”, since of course there are things that can literally just absorb our funding—literally just spending is easy.)
*E.g., personally I think trying to spend >$1b in 2023 on each of the AI things you mentioned would probably require funding some things that are net negative in expectation, but I also think we should keep those ideas in mind for the future, and spend a bit more slowly on other things for that reason.
Perhaps thinking of this post?
Maybe it was some combination of the posts with the megaprojects tag?