> This seems almost entirely useless; I don't think this would help at all.
I'm pretty surprised by the strength of that reaction. Some follow-ups:
How do you square that with the EA Funds (a) funding things that would increase the amount/quality/impact of EA-aligned research(ers), and (b) indicating in some places (e.g. here) the funds have room for more funding?
Is it that they have room for more funding only for things other than supporting EA-aligned research(ers)?
Do you disagree that the funds have room for more funding?
Do you think increasing available funding wouldn't help with any EA stuff, or do you just mean for increasing the amount/quality/impact of EA-aligned research(ers)?
Do you disagree with the EAIF grants that were focused on causing more effective giving (e.g., through direct fundraising or through research on the psychology and promotion of effective giving)?
Re 1: I think that the funds can maybe disburse more money (though I'm a little more bearish on this than Jonas and Max, I think). But I don't feel very excited about increasing the amount of stuff we fund by lowering our bar; as I've said elsewhere on the AMA, the limiting factor on a grant usually feels to me more like "is this grant so bad that it would damage things (including perhaps EA culture) in some way for me to make it?" than "is this grant good enough to be worth the money?"
I think that the funds' RFMF (room for more funding) is only slightly real: giving to the EAIF has some counterfactual impact, but not very much, and the impact comes from slightly weird places. For example, I personally have access to EA funders who are basically always happy to fund things that I want them to fund, so being an EAIF fund manager doesn't really increase my ability to direct money at promising projects that I run across. (It's helpful to have the grant logistics people from CEA, though, which makes the EAIF grantmaking experience a bit nicer.) The advantages I get from being an EAIF fund manager are that the EAIF seeks applications, so I get to make grants I wouldn't otherwise have known about, and that Michelle, Max, and Jonas sometimes provide useful second opinions on grants.
And so I think that if you give to the EAIF, I do slightly more good via grantmaking. But the mechanism is definitely not via me having access to more money.
> Is it that they have room for more funding only for things other than supporting EA-aligned research(ers)?
I think that it will be easier to increase our grantmaking for things other than supporting EA-aligned researchers with salaries, because this is almost entirely limited by how many strong candidates there are, and it seems hard to increase this directly with active grantmaking. In contrast, I feel more optimistic about doing active grantmaking to encourage retreats for researchers etc.
> Do you think increasing available funding wouldn't help with any EA stuff, or do you just mean for increasing the amount/quality/impact of EA-aligned research(ers)?
I think that if a new donor appeared and increased the amount of funding available to longtermism by $100B, this would maybe increase the total value of longtermist EA by 20%.
I think that increasing available funding basically won't help at all for causing interventions of the types you listed in your post; all of those are limited by factors other than funding.
(Non-longtermist EA is more funding-constrained, of course: there's an enormous amount of RFMF in GiveWell charities, and my impression is that farm animal welfare could also absorb a bunch of money.)
> Do you disagree with the EAIF grants that were focused on causing more effective giving (e.g., through direct fundraising or through research on the psychology and promotion of effective giving)?
Yes, I basically think of this as an almost complete waste of time and money from a longtermist perspective (and probably neartermist perspectives too). I think that research on effective giving is particularly useless because projects differ widely in their value, and my impression is that effective giving is mostly going to get people to give to relatively bad giving opportunities.
High Impact Athletes is an EAIF grantee that I feel positive about; I am enthusiastic about them not because they might raise funds but because they might be able to get athletes to influence culture in various ways (e.g., influencing public feelings about animal agriculture). And so I think it makes sense for them to initially focus on fundraising, but that's not where I expect most of their value to come from.
I am willing to fund orgs that attempt to just do fundraising, if their multiplier on their expenses is pretty good, because marginal money has more than zero value and I'd rather we had twice as much money. But I think that working for such an org is unlikely to be very impactful.
> I think that if a new donor appeared and increased the amount of funding available to longtermism by $100B, this would maybe increase the total value of longtermist EA by 20%.
At first glance the 20% figure sounded about right to me. However, when thinking a bit more about it, I'm worried that (at least in my case) this is too anchored on imagining "business as usual, but with more total capital". I'm wondering if most of the expected value of an additional $100B, especially when controlled by a single donor who can deploy it flexibly, comes from "crazy" and somewhat unlikely-to-pan-out options, i.e. things like:
Building an "EA city" somewhere
Buying a majority of shares of some AI company (or of relevant hardware companies)
Being able to spend tens of billions of $ on compute, at a time when few other actors are willing to do so
Buying the New York Times
Being among the first actors settling Mars
(To be clear, I think most of these things would be kind of dumb or impossible as stated, and maybe a "realistic" additional donor wouldn't be open to such things. I'm just gesturing at the rough shape of things which I suspect might contain a lot of the expected value.)
I think that "business as usual but with more total capital" leads to way less increased impact than 20%; I am taking into account the fact that we'd need to do crazy new types of spending.
Incidentally, you can't buy the New York Times on public markets; you'd have to do a private deal with the family that runs it.
Hmm. Then I'm not sure I agree. When I think of prototypical example scenarios of "business as usual but with more total capital", I kind of agree that they seem less valuable than +20%. But on the other hand, I feel like if I tried to come up with some first-principles-based "utility function", I'd be surprised if it had returns that diminish much more strongly than logarithmically. (That's at least my initial intuition; I'm not sure I could justify it.) And if it were logarithmic, going from $10B to $100B should add about as much value as going from $1B to $10B, and I feel like the former adds clearly more than 20%.
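As a quick sanity check of that logarithmic intuition, here is a tiny calculation (the log utility function is purely an illustrative assumption, not something anyone has argued for here):

```python
import math

# Toy logarithmic utility of total longtermist capital (illustrative assumption only).
def log_utility(capital_usd: float) -> float:
    return math.log(capital_usd)

# Under log utility, every 10x step in capital adds the same increment of utility,
# so going from $1B to $10B is worth about as much as going from $10B to $100B.
step_1b_to_10b = log_utility(10e9) - log_utility(1e9)
step_10b_to_100b = log_utility(100e9) - log_utility(10e9)
print(step_1b_to_10b, step_10b_to_100b)  # both ~2.30 (= ln 10)
```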
(I guess there is also the question of what exactly we're assuming. E.g., should the fact that this additional $100B donor appears also make me more optimistic about the growth and ceiling of total longtermist-aligned capital going forward? If not, i.e. if I should compare the additional $100B to the net present expected value of all longtermist capital that will ever appear, then I'm much more inclined to agree that "business as usual plus this extra capital" adds much less than 20%. In this latter case, getting the $100B now might simply compress the period of growth of longtermist capital from a few years or decades to a second, or something like that.)
OK, on second thought I think this argument doesn't work because it's basically double-counting: the reason why returns might not diminish much faster than logarithmically may be precisely that new, "crazy" opportunities become available.
Here's a toy model:
A production function roughly along the lines of utility = funding^0.2 * talent^0.6 (this has diminishing returns to funding*talent, but the returns diminish slowly)
A default assumption that longtermism will eventually end up with $30-$300B in funding; let's assume $100B
Increasing the funding from $100B to $200B would then increase utility by 15%.
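A minimal sketch of that toy model in code (same functional form and numbers as assumed above; nothing else is implied):

```python
# Toy production function from the model above: utility = funding^0.2 * talent^0.6.
def utility(funding_billions: float, talent: float = 1.0) -> float:
    return funding_billions ** 0.2 * talent ** 0.6

baseline = utility(100)  # default assumption: ~$100B of eventual longtermist funding
doubled = utility(200)   # the same world with the hypothetical extra $100B
print(f"relative increase in utility: {doubled / baseline - 1:.0%}")  # ~15%
```

Because the funding exponent is 0.2, doubling funding multiplies utility by 2^0.2, i.e. about 1.15, which is where the 15% comes from.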
> Do you disagree with the EAIF grants that were focused on causing more effective giving (e.g., through direct fundraising or through research on the psychology and promotion of effective giving)?
> Yes, I basically think of this as an almost complete waste of time and money from a longtermist perspective (and probably neartermist perspectives too).
Just wanted to flag briefly that I personally disagree with this:
I think that fundraising projects can be mildly helpful from a longtermist perspective if they are unusually good at directing the money really well (i.e., match or beat Open Phil's last dollar), and are truly increasing overall resources*. I think that there's a high chance that more financial resources won't be helpful at all, but some small chance that they will be, so the EV is still weakly positive.
I think that fundraising projects can be moderately helpful from a neartermist perspective if they are truly increasing overall resources*.
* Some models/calculations that I've seen don't do a great job of modelling the overall ROI from fundraising. They need to take into account not just the financial cost but also the talent cost of the project (which should often be valued at rates vastly higher than are common in the private sector), the counterfactual donations / Shapley value (the fundraising organization often doesn't deserve 100% of the credit for the money raised; some of the credit goes to the donor!), and a ~10-15% annual discount rate (this is the return I expect for smart, low-risk financial investments).
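To illustrate the shape of that adjustment, here is a rough sketch with entirely made-up numbers (nothing below is based on any actual organization's figures):

```python
# Rough fundraising multiplier after the adjustments described in the footnote above.
def adjusted_multiplier(
    money_raised: float,
    financial_cost: float,
    talent_cost: float,           # opportunity cost of staff time, often far above market salaries
    credit_share: float,          # fraction of the raised money attributable to the org (vs. the donors)
    annual_discount_rate: float,  # e.g. 0.10-0.15, per the footnote
    years_until_donated: float = 1.0,
) -> float:
    discounted_raise = money_raised / (1 + annual_discount_rate) ** years_until_donated
    return (credit_share * discounted_raise) / (financial_cost + talent_cost)

# Made-up example: $1M raised on $100k of financial cost looks like a 10x multiplier,
# but with $300k of talent cost, 50% credit, and a 12% discount over one year it is ~1.1x.
print(adjusted_multiplier(1_000_000, 100_000, 300_000, 0.5, 0.12))
```

The point is only that the naive "money raised divided by money spent" multiplier can shrink by an order of magnitude once talent cost, credit-sharing, and discounting are included.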
I still somewhat share Buck's overall sentiment: I think fundraising runs the risk of being a bit of a distraction. I personally regret co-running a fundraising organization and writing a thesis paper about donation behavior. I'd rather have spent my time learning about AI policy (or, if I were a neartermist, I might say e.g. charter cities, growth diagnostics in development economics, NTD eradication programs, or factory farming in developing countries). I would love it if EAs generally spent less time worrying about money and more time recruiting talent, improving the trajectory of the community, and solving the problems on the object level.
Overall, I want to continue funding good fundraising organizations.
> I think that if a new donor appeared and increased the amount of funding available to longtermism by $100B, this would maybe increase the total value of longtermist EA by 20%.
I'm curious how much money you and others think longtermist EA has access to right now / will have access to in the near future. The 20% number seems like a noticeably weaker claim if longtermist EA currently has access to $100B than if we currently have access to $100M.
I actually think this is surprisingly non-straightforward. Any estimate of the net present value of total longtermist $$ will have considerable uncertainty because it's a combination of several things, many of which are highly uncertain (see the rough sketch after this list for how they might combine):
How much longtermist $$ is there now?
This is the least uncertain one. It's not super straightforward and requires nonpublic knowledge about the wealth and goals of some large individual donors, but I'd be surprised if my estimate on this was off by 10x.
What will the financial returns on current longtermist $$ be before they're spent?
Over long timescales, for some of that capital, this might be "only" as volatile as the stock market or some other "broad" index.
But for some share of that capital (as well as on shorter timescales) this will be absurdly volatile. Cf. the recent fortunes some EAs have made in crypto.
How much new longtermist $$ will come in at which times in the future?
This seems highly uncertain because it's probably very heavy-tailed. E.g., there may well be a single source that increases total capital by 2x or 10x. Naturally, predicting the timing of such a single event will be quite uncertain on a time scale of years or even decades.
What should the discount rate for longtermist $$ be?
Over the last year, someone who has thought about this quite a bit told me first that they had updated from 10% per year to 6%, and then a few months later back again. This is a difference of one order of magnitude for $$ coming in in 50 years.
What counts as longtermist $$? If, e.g., the US government started spending billions on AI safety or biosecurity, most of which went to things that, from a longtermist EA perspective, are kind of useful but not super useful, how would that count?
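To make concrete how these uncertain pieces might combine, here is a minimal Monte Carlo sketch; every distribution and parameter in it is an invented placeholder (and it folds investment returns into the discount rate), so the printed numbers are not estimates anyone here endorses:

```python
import math
import random

def sample_npv_billions(rng: random.Random) -> float:
    """One draw of the net present value of longtermist capital, in $B (toy model, toy numbers)."""
    npv = rng.lognormvariate(math.log(30), 0.4)  # capital available today
    discount = rng.uniform(0.04, 0.10)           # net discount rate applied to future money
    for year in range(1, 51):                    # new money arriving over the next 50 years
        inflow = rng.lognormvariate(math.log(1.0), 1.5)  # heavy-tailed: occasionally a huge new donor
        npv += inflow / (1 + discount) ** year
    return npv

rng = random.Random(0)
draws = sorted(sample_npv_billions(rng) for _ in range(10_000))
print(f"10th/50th/90th percentile NPV: ${draws[1000]:.0f}B / ${draws[5000]:.0f}B / ${draws[9000]:.0f}B")
```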
I think that for some narrow notion of roughly "longtermist $$ as 'aligned' as Open Phil's longtermist pot", my 80% credence interval for the net present value is $30B-$1 trillion. I'm super confused about how to think about the upper end because the 90th-percentile case is some super weird transformative AI future. Maybe I should instead say that my 50% credence interval is $20B-$200B.
Generally my view on this isn't that well considered and probably not that resilient.
Interesting, thanks.
> "... my 80% credence interval for the net present value is $30B-$1 trillion. I'm super confused how to think about the upper end because the 90th percentile case is some super weird transformative AI future. Maybe I should instead say that my 50% credence interval is $20B-$200B." [emphases added]
Shouldn't your lower bound for the 50% interval be higher than for the 80% interval? Or is the second interval based on different assumptions, e.g. including/ruling out some AI stuff?
(Not sure this is an important question, given how much uncertainty there is in these numbers anyway.)
> Shouldn't your lower bound for the 50% interval be higher than for the 80% interval?
If the intervals were centered (i.e., spanning the 10th to 90th and the 25th to 75th percentiles, respectively), then it should be, yes.
I could now claim that I wasn't giving centered intervals, but I think what is really going on is that my estimates are not diachronically consistent, even if I make them within one minute of each other.
I also now think that the lower end of the 80% interval should probably be more like $5-15B.
I think we roughly agree on the direct effect of fundraising orgs, promoting effective giving, etc., from a longtermist perspective.
However, I suspect I'm (perhaps significantly) more optimistic than you about "indirect" effects from promoting good content and advice on effective giving, promoting it as a "social norm", etc. This is roughly because of the view I state under the first key uncertainty here, i.e., I suspect that encountering effective giving can for some people be a "gateway" toward more impactful behaviors.
One issue is that I think the sign and absolute value of these indirect effects are not that well correlated with the proxy goals such organizations would optimize, e.g., the amount of money raised. For example, I'd guess it's much better for these indirect effects if the org is also impressive intellectually or entrepreneurially; if it produces "evangelists" rather than just people who'll start giving 1% as a "hobby", are quiet about it, and otherwise don't think much about it; if it engages in higher-bandwidth interactions with some of its audience; and if, in its communications, it at least sometimes mentions other potentially impactful behaviors.
So, e.g., GiveWell by these lights looks much better than REG, which in turn looks much better than, say, buying Facebook ads for AMF.
(I'm also quite uncertain about all of this. E.g., I wouldn't be shocked if, after significant additional consideration, I ended up thinking that the indirect effects of promoting effective giving, even in a "good" way, were significantly net negative.)
When I said that the EAIF and LTFF have room for more funding, I didn't mean to say "EA research is funding-constrained" but "I think some of the abundant EA research funding should be allocated here."
Saying "this particular pot has room for more funding" can be fully consistent with the overall ecosystem being saturated with funding.
> Do you think increasing available funding wouldn't help with any EA stuff
I think it definitely helps a lot with neartermist interventions. I also think it still makes a substantial* difference in longtermism, including research, but the difference you can make through direct work is plausibly vastly greater (>10x greater).
* Substantial in the sense of "if you calculate the expected impact, it'll be huge", not "substantial relative to the EA community's total impact."
> When I said that the EAIF and LTFF have room for more funding, I didn't mean to say "EA research is funding-constrained" but "I think some of the abundant EA research funding should be allocated here."
Ah, good point. So is your independent impression that the very large donors (e.g., Open Phil) are making a mistake by not multiplying the total funding allocated to EAIF and LTFF by (say) a factor of 0.5-5?
(I don't think that that is a logically necessary consequence of what you said, but it seems like it could be a consequence of what you said plus some plausible other premises.
I ask about the very large donors specifically because things you've said elsewhere already indicate you think smaller donors are indeed often making a mistake by not allocating more funding to EAIF and LTFF. But maybe I'm wrong about that.)
I don't think anyone has made any mistakes so far, but they would (in my view) be making a mistake if they didn't allocate more funding this year.

Hmm, why do you think this? I don't remember having said that.
Actually I now think I was just wrong about that, sorry. I had been going off of vague memories, but when I checked your post history now to try to work out what I was remembering, I realised it may have been my memory playing weird tricks based on your donor lottery post, which actually made almost the opposite claim. Specifically, you say "For this reason, we believe that a donor lottery is the most effective way for most smaller donors to give the majority of their donations, for those who feel comfortable with it."
(Which implies you think that that's a more effective way for most smaller donors to give than giving to the EA Funds right away, rather than after winning a lottery and maybe ultimately deciding to give to the EA Funds.)
I think I may have been kind of remembering what David Moss said as if it was your view, which is weird, since David was pushing against what you said. I've now struck out that part of my comment.