Hmm, it feels unclear to me what you're claiming here. In particular, I'm not sure which of the following is your claim:
1. "Right now all money committed to EA could be spent on things that we currently (should) think are at least slightly net positive in expectation. (Even if we maybe shouldn't spend on those things yet, since maybe we should wait for even better opportunities.)"
2. "Right now all money committed to EA could be spent on things that might be net positive in expectation. (But there aren't enough identified opportunities that we currently think are net positive to absorb all current EA money. Some of the things currently look net negative but with high uncertainty, and we need to do further research, or wait until things naturally become closer and clearer, to find out which are net positive. We also need to find more opportunities.)"
Claim 1 is a stronger and more interesting claim than claim 2, but you don't make it clear which one you're asserting.
If claim 2 is true, then we're still "severely bottlenecked by good funding opportunities", and by strategic clarity. So it might be that the people you're talking to already believe claim 2, rather than believing that EA funding is effectively infinite?
To be clear, I do think claim 2 is importantly different from "we have effectively infinite money", in particular in that it pushes in favor of not spending now on opportunities that are only very slightly net positive, since we want to save money for when we've learned more about which of the known, maybe-good, huge funding opportunities are actually good.* So if there are people acting and thinking as though we have effectively infinite money, I do think they should get ~this message. But I think your shortform could benefit from distinguishing claims 1 and 2.
(Also, a nit-picky point: I'd suggest avoiding phrasing like "could plausibly absorb all EA funding" without a word like "productively", since of course there are things that can literally just absorb our funding; literally just spending is easy.)
*E.g., personally I think trying to spend >$1b in 2023 on each of the AI things you mentioned would probably require spending on some things that are net negative in expectation, but I also think we should keep those ideas in mind for the future, and spend a bit more slowly on other things for that reason.