I think that if a new donor appeared and increased the amount of funding available to longtermism by $100B, this would maybe increase the total value of longtermist EA by 20%.
At first glance the 20% figure sounded about right to me. However, when thinking a bit more about it, I’m worried that (at least in my case) this is too anchored on imagining “business as usual, but with more total capital”. I’m wondering if most of the expected value of an additional $100B—especially when controlled by a single donor who can flexibly deploy them—comes from ‘crazy’ and somewhat unlikely-to-pan-out options. I.e., things like:
Building an “EA city” somewhere
Buying a majority of shares of some AI company (or of relevant hardware companies)
Being able to spend tens of billions of $ on compute, at a time when few other actors are willing to do so
Buying the New York Times
Being among the first actors settling Mars
(Tbc, I think most of these things would be kind of dumb or impossible as stated, and maybe a “realistic” additional donor wouldn’t be open to such things. I’m just gesturing at the rough shape of things which I suspect might contain a lot of the expected value.)
I think that “business as usual but with more total capital” leads to way less increased impact than 20%; my 20% figure already takes into account the fact that we’d need to do crazy new types of spending.
Incidentally, you can’t buy the New York Times on public markets; you’d have to do a private deal with the family who runs it.
Hmm. Then I’m not sure I agree. When I think of prototypical example scenarios of “business as usual but with more total capital” I kind of agree that they seem less valuable than +20%. But on the other hand, I feel like if I tried to come up with some first-principles-based ‘utility function’ I’d be surprised if it had returns that diminish much more strongly than logarithmic. (That’s at least my initial intuition—not sure I could justify it.) And if it were logarithmic, going from $10B to $100B should add about as much value as going from $1B to $10B, and I feel like the former adds clearly more than 20%.
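To make the logarithmic intuition concrete, here’s a minimal sketch (log-shaped utility is just the assumption under discussion, and the helper function is hypothetical):

```python
import math

# Sketch: assume utility is logarithmic in total longtermist capital (in $B).
def log_utility(capital_billions):
    return math.log(capital_billions)

# Under log utility, every 10x increase in capital adds the same utility,
# so going from $10B to $100B is worth about as much as going from $1B to $10B.
print(log_utility(10) - log_utility(1))    # ~2.30
print(log_utility(100) - log_utility(10))  # ~2.30
```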
(I guess there is also the question of what exactly we’re assuming. E.g., should the fact that this additional $100B donor appears also make me more optimistic about the growth and ceiling of total longtermist-aligned capital going forward? If not, i.e., if I should compare the additional $100B to the net present expected value of all longtermist capital that will ever appear, then I’m much more inclined to agree that “business as usual plus this extra capital” adds much less than 20%. In this latter case, getting the $100B now might simply compress the period of growth of longtermist capital from a few years or decades to a second, or something like that.)
OK, on second thought I think this argument doesn’t work, because it’s basically double-counting: the reason why returns might not diminish much faster than logarithmic may be precisely that new, ‘crazy’ opportunities become available.
Here’s a toy model:
A production function roughly along the lines of utility = funding^0.2 * talent^0.6 (this has diminishing returns to both funding and talent, but the returns diminish slowly)
A default assumption that longtermism will eventually end up with $30B–$300B in funding; let’s assume $100B
Increasing the funding from $100B to $200B would then increase utility by about 15% (since 2^0.2 ≈ 1.15).
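As a quick sanity check of that 15% figure, here’s a minimal sketch of the toy model (the exponents and funding levels are just the assumptions listed above, and talent is held fixed):

```python
# Toy production function from above: utility = funding^0.2 * talent^0.6.
def utility(funding_billions, talent=1.0):
    return funding_billions ** 0.2 * talent ** 0.6

baseline = utility(100)  # default assumption: ~$100B of eventual funding
doubled = utility(200)   # the additional $100B arrives

print(f"{doubled / baseline - 1:.0%}")  # ~15%, since 2^0.2 ≈ 1.149
```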