Your estimate seems optimistic to me because:
(a) It seems likely that even in a wildly successful case of EA going more mainstream, Impact List could only take a fraction of the credit for that. E.g. if 10 years from now the total amount of money committed to EA (in 2022 dollars) has increased from its current ~$40B to ~$400B, I’d probably assign only about 10% of the credit for that growth to a $1M/year (2022 dollars) Impact List project, even in the case where it seemed like Impact List played a large role. So that’s maybe $36B or so of donations that the $10M investment in Impact List can take credit for.
(b) When we’re talking about hundreds of billions of dollars, there’s significant diminishing marginal value to the money being committed to EA. So turn the $36B into $10B or something (I’m not sure what the appropriate discount should be). Then we’re talking about a 0.1%-1% chance of that. So that’s $10M-$100M of value (see the rough sketch below).
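A minimal numeric sketch of the arithmetic in (a) and (b), purely to make the steps explicit; the 10% credit share, the ~$10B post-discount figure, and the 0.1%-1% probability range are the rough figures above, not precise estimates:

```python
# Back-of-envelope sketch of (a) and (b); all inputs are rough illustrative figures.
growth = 400e9 - 40e9      # hypothetical 10-year growth in money committed to EA: $360B
credited = growth * 0.10   # ~10% of that growth credited to Impact List: ~$36B
discounted = 10e9          # after a (very uncertain) diminishing-returns discount: ~$10B
for p in (0.001, 0.01):    # 0.1%-1% chance of this wildly successful case
    print(f"p = {p:.1%}: expected value ~ ${discounted * p / 1e6:.0f}M")
# -> ~$10M of expected value at a 0.1% chance, ~$100M at 1%, vs. ~$10M spent on the project
```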
If a good team can be assembled, it does seem worth funding to me, but it doesn’t seem as clear-cut as your estimate suggests.
Thanks for the feedback!
Regarding (a), it isn’t clear to me that, conditional on Impact List being wildly successful (which I’m interpreting as roughly the $110B-over-ten-years case), we should expect it to account for no more than 10% of overall EA outreach impact. Conditional on Impact List accounting for $110B, I don’t think I’d feel surprised to learn that EA controls only $400B (or even $200B) instead of ~$1T. Can you say more about why that would be surprising?
(I do think there’s a ~5% chance that EA controls or has deployed $1T within ten years.)
I think (b) is a legit argument in general, although I have a lot of uncertainty about what the appropriate discount should be. It also highlights that using dollars as a measure of impact can be unclear, and that my EV calculation bucketed money as either ‘ineffective’ or ‘effective’ without spelling out the implications.
A few implications of that:
There’s a ‘free parameter’ in the EV calculation that isn’t obvious: the threshold we use to separate effective from ineffective donations. We might pick something like ‘effective = anything roughly as effective as, or more effective than, current GiveWell top charities’.
That threshold influences whether our probability estimates are reasonable. For instance, depending on the threshold, someone could object: “A 1 in 1000 chance of $110B being moved to things as effective as GiveDirectly seems reasonable, but 1 in 1000 for $110B being moved to things as effective as AMF? No way!”
As noted in footnote 4, we assume that the donations in the ‘ineffective’ bucket are so much less effective than the donations in the ‘effective’ bucket that we can ignore them. Alternatively, we can assume that enough of the ‘effective’ donations are far enough above the minimum effectiveness threshold that they at least cancel out all the ineffective donations.
The threshold we pick also determines what it means when we talk about expected value. If we say the expected value of Impact List is $X, it means roughly $X being put into things at least as effective as our threshold. We could be underestimating if Impact List causes people to donate a lot to ultra-effective orgs (and it might, if people try hard to optimize their rankings), but I didn’t try to model that.
Given the bucketing, and given that “$X of value” doesn’t mean “$X put into the most effective cause area”, I think it may be reasonable not to apply a discount. Not applying one assumes that over the next ten years we’ll find enough (or scalable enough) cause areas at least as effective as whatever threshold we pick that they can soak up an extra ~$110B. This is probably a lot more plausible to those who prioritize x-risk than to those who think global health will be the top cause area over that period. (A rough numerical sketch of this bucketed calculation follows.)
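To make the bucketing concrete, here is a minimal sketch of the kind of EV calculation described above. Only the 1-in-1000 / $110B pair comes from this discussion; the other probability/amount pairs are made-up placeholders, not figures from the original post:

```python
# Hedged sketch of a bucketed EV estimate. Donations below the effectiveness threshold
# are treated as roughly zero value (the footnote 4 assumption); donations at or above
# it are counted at face value, with no diminishing-returns discount applied.
scenarios = [
    # (probability, dollars moved into 'effective' donations over ten years)
    (0.001, 110e9),  # the 'wildly successful' case discussed above
    (0.01, 11e9),    # placeholder: moderate success
    (0.10, 1e9),     # placeholder: modest success
]

expected_value = sum(p * amount for p, amount in scenarios)  # ~$320M with these inputs
cost = 10e6                                                  # ~$1M/year over ten years

print(f"EV ~ ${expected_value / 1e6:.0f}M, cost ~ ${cost / 1e6:.0f}M, "
      f"ratio ~ {expected_value / cost:.0f}x")
```

If we picked a higher threshold (e.g. AMF-level rather than GiveDirectly-level effectiveness), the same structure would apply but the probabilities would have to shrink, per the objection above.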
Considerations in the opposite direction:
Value of information from initial investments in the project: if it’s not looking good after a year, the project can be abandoned when <<$10M has been spent.
80/20 rule: It could influence one person to become the new top EA funder, and this could represent a majority of the money moved to high-cost-effectiveness philanthropy.
It could positively influence the trajectory of EA giving, such that capping the influence at 10 years doesn’t capture a lot of the value. E.g. someone who is a child now becomes the next SBF in another 10-20 years, in part due to the impact the list has on the culture of giving.