Re 1: I think that the funds can maybe disburse more money (though I’m a little more bearish on this than Jonas and Max, I think). But I don’t feel very excited about increasing the amount of stuff we fund by lowering our bar; as I’ve said elsewhere on the AMA, the limiting factor on a grant usually feels to me more like “is this grant so bad that it would damage things (including perhaps EA culture) in some way for me to make it” than “is this grant good enough to be worth the money”.
I think that the funds’ RFMF (room for more funding) is only slightly real—I think that giving to the EAIF has some counterfactual impact, but not very much, and the impact comes from slightly weird places. For example, I personally have access to EA funders who are basically always happy to fund things that I want them to fund. So being an EAIF fund manager doesn’t really increase my ability to direct money at promising projects that I run across. (It’s helpful to have the grant logistics people from CEA, though, which makes the EAIF grantmaking experience a bit nicer.) The advantages I get from being an EAIF fund manager are that the EAIF seeks applications, so I get to make grants I wouldn’t otherwise have known about, and that Michelle, Max, and Jonas sometimes provide useful second opinions on grants.
And so I think that if you give to the EAIF, I do slightly more good via grantmaking. But the mechanism is definitely not via me having access to more money.
> Is it that they have room for more funding only for things other than supporting EA-aligned research(ers)?
I think that it will be easier to increase our grantmaking for things other than supporting EA-aligned researchers with salaries, because this is almost entirely limited by how many strong candidates there are, and it seems hard to increase this directly with active grantmaking. In contrast, I feel more optimistic about doing active grantmaking to encourage retreats for researchers etc.
> Do you think increasing available funding wouldn’t help with any EA stuff, or do you just mean for increasing the amount/quality/impact of EA-aligned research(ers)?
I think that if a new donor appeared and increased the amount of funding available to longtermism by $100B, this would maybe increase the total value of longtermist EA by 20%.
I think that increasing available funding basically won’t help at all for causing interventions of the types you listed in your post—all of those are limited by factors other than funding.
(Non-longtermist EA is more funding-constrained, of course—there are enormous amounts of RFMF in GiveWell charities, and my impression is that farm animal welfare could also absorb a bunch of money.)
> Do you disagree with the EAIF grants that were focused on causing more effective giving (e.g., through direct fundraising or through research on the psychology and promotion of effective giving)?
Yes, I basically think of this as an almost complete waste of time and money from a longtermist perspective (and probably neartermist perspectives too). I think that research on effective giving is particularly useless because projects differ widely in their value, and my impression is that effective giving is mostly going to get people to give to relatively bad giving opportunities.
High Impact Athletes is an EAIF grantee that I feel positive about; I am enthusiastic about them not because they might raise funds but because they might be able to get athletes to influence culture in various ways (e.g., influencing public feelings about animal agriculture). And so I think it makes sense for them to initially focus on fundraising, but that’s not where I expect most of their value to come from.
I am willing to fund orgs that attempt to just do fundraising, if their multiplier on their expenses is pretty good, because marginal money has more than zero value and I’d rather we had twice as much money. But I think that working for such an org is unlikely to be very impactful.
> I think that if a new donor appeared and increased the amount of funding available to longtermism by $100B, this would maybe increase the total value of longtermist EA by 20%.
At first glance the 20% figure sounded about right to me. However, when thinking a bit more about it, I’m worried that (at least in my case) this is too anchored on imagining “business as usual, but with more total capital”. I’m wondering if most of the expected value of an additional $100B—especially when controlled by a single donor who can flexibly deploy it—comes from ‘crazy’ and somewhat unlikely-to-pan-out options, i.e., things like:
Building an “EA city” somewhere
Buying a majority of shares of some AI company (or of relevant hardware companies)
Being able to spend tens of billions of $ on compute, at a time when few other actors are willing to do so
Buying the New York Times
Being among the first actors settling Mars
(To be clear, I think most of these things would be kind of dumb or impossible as stated, and maybe a “realistic” additional donor wouldn’t be open to such things. I’m just gesturing at the rough shape of things which I suspect might contain a lot of the expected value.)
I think that “business as usual but with more total capital” leads to way less increased impact than 20%; I am taking into account the fact that we’d need to do crazy new types of spending.
Incidentally, you can’t buy the New York Times on public markets; you’d have to do a private deal with the family who runs it.
Hmm. Then I’m not sure I agree. When I think of prototypical example scenarios of “business as usual but with more total capital”, I kind of agree that they seem less valuable than +20%. But on the other hand, I feel like if I tried to come up with some first-principles-based ‘utility function’, I’d be surprised if it had returns that diminish much more strongly than logarithmic. (That’s at least my initial intuition—not sure I could justify it.) And if it were logarithmic, going from $10B to $100B should add about as much value as going from $1B to $10B, and I feel like the former adds clearly more than 20%.
(I guess there is also the question of what exactly we’re assuming. E.g., should the fact that this additional $100B donor appears also make me more optimistic about the growth and ceiling of total longtermist-aligned capital going forward? If not, i.e., if I should compare the additional $100B to the net present expected value of all longtermist capital that will ever appear, then I’m much more inclined to agree that “business as usual + this extra capital” adds much less than 20%. In this latter case, getting the $100B now might simply compress the period of growth of longtermist capital from a few years or decades to a second, or something like that.)
OK, on second thought I think this argument doesn’t work, because it’s basically double-counting: the reason why returns might not diminish much faster than logarithmic may be precisely that new, ‘crazy’ opportunities become available.
Here’s a toy model:
A production function roughly along the lines of utility = funding ^ 0.2 * talent ^ 0.6 (this has diminishing returns to funding*talent, but the returns diminish slowly)
A default assumption that longtermism will eventually end up with $30-$300B in funding, let’s assume $100B
Increasing the funding from $100B to $200B would then increase utility by 15%.
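As a quick check of that 15% figure, here is a minimal sketch of the arithmetic, assuming the toy production function above with talent held fixed at an arbitrary level (the specific numbers are purely illustrative):

```python
# Toy production function from the comment above: utility = funding**0.2 * talent**0.6
def utility(funding, talent):
    return funding ** 0.2 * talent ** 0.6

talent = 1.0                       # arbitrary fixed level; it cancels out of the ratio
baseline = utility(100e9, talent)  # default assumption of ~$100B in funding
doubled = utility(200e9, talent)   # after an additional $100B
print(doubled / baseline - 1)      # ~0.149, i.e. roughly the 15% increase mentioned
```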
> Do you disagree with the EAIF grants that were focused on causing more effective giving (e.g., through direct fundraising or through research on the psychology and promotion of effective giving)?
> Yes, I basically think of this as an almost complete waste of time and money from a longtermist perspective (and probably neartermist perspectives too).
Just wanted to flag briefly that I personally disagree with this:
I think that fundraising projects can be mildly helpful from a longtermist perspective if they are unusually good at directing the money really well (i.e., match or beat Open Phil’s last dollar), and are truly increasing overall resources*. I think that there’s a high chance that more financial resources won’t be helpful at all, but some small chance that they will be, so the EV is still weakly positive.
I think that fundraising projects can be moderately helpful from a neartermist perspective if they are truly increasing overall resources*.
* Some models/calculations that I’ve seen don’t do a great job of modelling the overall ROI from fundraising. They need to take into account not just the financial cost but also the talent cost of the project (which should often be valued at rates vastly higher than are common in the private sector), the counterfactual donations / Shapley value (the fundraising organization often doesn’t deserve 100% of the credit for the money raised – some of the credit goes to the donor!), and a ~10-15% annual discount rate (this is the return I expect for smart, low-risk financial investments).
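To illustrate the kind of adjustment this footnote is pointing at, here is a rough, hypothetical back-of-the-envelope sketch; every figure and variable name below is invented for illustration and not taken from any actual evaluation:

```python
# Hypothetical fundraising-ROI sketch illustrating the adjustments described in the footnote.
# All numbers are made up for illustration.

money_raised = 1_000_000    # gross donations attributed to the fundraising project
counterfactual_share = 0.5  # share of credit after accounting for what donors would have done anyway
delay_years = 3             # rough delay before the raised money is comparable to money in hand today
discount_rate = 0.12        # within the ~10-15% annual range mentioned above

financial_cost = 150_000    # the project's direct budget
talent_cost = 300_000       # opportunity cost of staff time, valued well above market rates

effective_raised = money_raised * counterfactual_share / (1 + discount_rate) ** delay_years
true_multiplier = effective_raised / (financial_cost + talent_cost)
print(true_multiplier)      # ~0.79 here, i.e. below break-even despite a naive 1M/150k ≈ 6.7x multiplier
```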
I still somewhat share Buck’s overall sentiment: I think fundraising runs the risk of being a bit of a distraction. I personally regret co-running a fundraising organization and writing a thesis paper about donation behavior. I’d rather have spent my time learning about AI policy (or, if I were a neartermist, I might say e.g. charter cities, growth diagnostics in development economics, NTD eradication programs, or factory farming in developing countries). I would love it if EAs generally spent less time worrying about money and more time on recruiting talent, improving the trajectory of the community, and solving problems on the object level.
Overall, I want to continue funding good fundraising organizations.
> I think that if a new donor appeared and increased the amount of funding available to longtermism by $100B, this would maybe increase the total value of longtermist EA by 20%.
I’m curious how much money you and others think longtermist EA has access to right now / will have access to in the near future. The 20% number seems like a noticeably weaker claim if longtermist EA currently has access to $100B than if we currently have access to $100M.
I actually think this is surprisingly non-straightforward. Any estimate of the net present value of total longtermist $$ will have considerable uncertainty because it’s a combination of several things, many of which are highly uncertain:
How much longtermist $$ is there now?
This is the least uncertain one. It’s not super straightforward and requires nonpublic knowledge about the wealth and goals of some large individual donors, but I’d be surprised if my estimate on this was off by 10x.
What will the financial returns on current longtermist $$ be before they’re spent?
Over long timescales, for some of that capital, this might be ‘only’ as volatile as the stock market or some other ‘broad’ index.
But for some share of that capital (as well as on shorter time scales) this will be absurdly volatile. Cf. the recent fortunes some EAs have made in crypto.
How much new longtermist $$ will come in at which times in the future?
This seems highly uncertain because it’s probably very heavy-tailed. E.g., there may well be a single source that increases total capital by 2x or 10x. Naturally, predicting the timing of such a single event will be quite uncertain on a time scale of years or even decades.
What should the discount rate for longtermist $$ be?
Over the last year, someone who has thought about this quite a bit told me first that they had updated from 10% per year to 6%, and then a few months later back again. This makes close to an order-of-magnitude difference to the present value of $$ arriving in 50 years (see the quick check below).
What counts as longtermist $$? If, e.g., the US government started spending billions on AI safety or biosecurity, most of which went to things that, from a longtermist EA perspective, are kind of but not super useful, how would that count?
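A quick check of the discount-rate point above, assuming simple annual compounding: the gap between 6% and 10% per year works out to roughly a 6-7x difference in the present value of money arriving in 50 years, so close to (though a bit short of) a full order of magnitude:

```python
# Present value of $1 arriving in 50 years under the two discount rates mentioned above.
pv_at_6_percent = 1 / 1.06 ** 50   # ~0.054
pv_at_10_percent = 1 / 1.10 ** 50  # ~0.0085
print(pv_at_6_percent / pv_at_10_percent)  # ~6.4x difference in present value
```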
I think for some narrow notion of roughly “longtermist $$ as ‘aligned’ as Open Phil’s longtermist pot” my 80% credence interval for the net present value is $30B - $1 trillion. I’m super confused how to think about the upper end because the 90th percentile case is some super weird transformative AI future. Maybe I should instead say that my 50% credence interval is $20B - $200B.
Generally my view on this isn’t that well considered and probably not that resilient.
> “… my 80% credence interval for the net present value is $30B - $1 trillion. I’m super confused how to think about the upper end because the 90th percentile case is some super weird transformative AI future. Maybe I should instead say that my 50% credence interval is $20B - $200B.” [emphases added]
Interesting, thanks. Shouldn’t your lower bound for the 50% interval be higher than for the 80% interval? Or is the second interval based on different assumptions, e.g. including/ruling out some AI stuff?
(Not sure this is an important question, given how much uncertainty there is in these numbers anyway.)
> Shouldn’t your lower bound for the 50% interval be higher than for the 80% interval?
If the intervals were centered—i.e., spanning the 10th to 90th and the 25th to 75th percentile, respectively—then it should be, yes.
I could now claim that I wasn’t giving centered intervals, but I think what is really going on is that my estimates are not diachronically consistent even if I make them within 1 minute of each other.
I also now think that the lower end of the 80% interval should probably be more like $5-15B.
I think we roughly agree on the direct effect of fundraising orgs, promoting effective giving, etc., from a longtermist perspective.
However, I suspect I’m (perhaps significantly) more optimistic than you about ‘indirect’ effects from promoting good content and advice on effective giving, promoting it as a ‘social norm’, etc. This is roughly because of the view I state under the first key uncertainty here, i.e., I suspect that encountering effective giving can for some people be a ‘gateway’ toward more impactful behaviors.
One issue is that I think the sign and absolute value of these indirect effects are not that well correlated with the proxy goals such organizations would optimize, e.g., the amount of money raised. For example, I’d guess it’s much better for these indirect effects if the org is also impressive intellectually or entrepreneurially; if it produces “evangelists” rather than just people who’ll start giving 1% as a ‘hobby’, are quiet about it, and otherwise don’t think much about it; if it engages in higher-bandwidth interactions with some of its audience; and if, in its communications, it at least sometimes mentions other potentially impactful behaviors.
So, e.g., GiveWell by these lights looks much better than REG, which in turn looks much better than, say, buying Facebook ads for AMF.
(I’m also quite uncertain about all of this. E.g., I wouldn’t be shocked if after significant additional consideration I ended up thinking that the indirect effects of promoting effective giving—even in a ‘good’ way—were significantly net negative.)