While it’s hard to disagree with the math, would it not be fairly unlikely that the current allocation of resources is close enough to the ideal allocation for this to realistically lead to splitting an agent’s resources across more than one cause area? As you mention, the allocation within the community-building cause area itself is one of the more likely candidates, since we hold a large piece of that pie (if not all of it). However, the community is not one agent, so we would need to funnel the money through e.g. EA Funds, correct?
Alternatively, there could be a top-level analysis of what the distribution -ought- to be and what it -currently is-, which then suggests how people should donate to close that gap. But is this really different from arguments in terms of marginal impact and neglectedness? I agree your line of thinking ought to be followed in such an analysis, but I’m not convinced it isn’t incorporated already.
It also doesn’t solve issues like the one Sam Bankman-Fried mentioned, where according to some argument one cause area is 44 orders of magnitude more impactful than another: even if the two causes are multiplicative, if I understand correctly this would imply a resource allocation of 1:10^44, which is effectively the same as going all in on the larger cause area. I think that even in far less extreme cases we should be considerably more “egalitarian” in our distribution of resources than multiplicative causes (and especially additive causes) suggest, because statistically speaking, the higher the estimated expected value of a cause area, the more likely it is to be an overestimate.
I do think this is a useful framework on a smaller scale, e.g. your example of focusing on new talent versus improving existing talent within the EA community. For local communities, where a small group of agents plays a determining role in where the focus lies, this can be applied much more easily than in global cause-area resource allocation.
I address the points you mention in my response to Carl.
It also doesn’t solve issues like the one Sam Bankman-Fried mentioned, where according to some argument one cause area is 44 orders of magnitude more impactful than another: even if the two causes are multiplicative, if I understand correctly this would imply a resource allocation of 1:10^44, which is effectively the same as going all in on the larger cause area.
I don’t think this is the right way to understand the issue, but it’s hard to say, since I’m a bit confused about what you mean by ‘more impactful’ in the context of multiplying variables. Could you give an example?
I guess when I say “more impactful” I mean “higher output elasticity”.
We can go with the example of x-risk vs. poverty reduction (as mentioned by Carl as well). Suppose we think that allocating resources to reduce x-risk has an output elasticity 100,000 times that of poverty reduction, but that reducing poverty improves the future and reducing x-risk makes reducing poverty more valuable; then you ought to handle them multiplicatively instead of additively, like you said.
If you had 100,001 units of resources to spend, that would mean 100,000 units against x-risk and 1 unit for poverty reduction, as opposed to 100,001 for x-risk and 0 for poverty reduction when looking at them independently (i.e. additively). Sam implies that additive reasoning is erroneous in such situations, after mentioning an example with such a massive discrepancy in elasticity. I’m pointing out that this does not seem to make much of a real difference in such cases, because even with proportional allocation it is effectively the same as going all in on (in this example) x-risk.
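To make the arithmetic explicit, here is a minimal sketch of what I have in mind, assuming (my assumption, not something spelled out above) that the multiplicative case is a Cobb-Douglas-style value function:

```latex
\[
  V(x, y) = x^{\alpha} y^{\beta}
  % x = resources for x-risk reduction, y = resources for poverty reduction
\]
\[
  \max_{x + y = B} V
  \;\Longrightarrow\;
  x^{*} = \frac{\alpha}{\alpha + \beta}\,B,
  \qquad
  y^{*} = \frac{\beta}{\alpha + \beta}\,B
\]
% With \alpha/\beta = 100{,}000 and B = 100{,}001, this is the 100,000 : 1 split above;
% with \alpha/\beta = 10^{44}, the optimum is effectively all-in on the larger cause.
```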
Anyway, I’m not claiming that this makes the multiplicative approach incorrect (or rather, less correct than the additive one), just that in this case, which is mentioned as one of the motivations for the framework, it really doesn’t make much of a difference (though things like diminishing returns would). Maybe this would have been more fitting as a reply to Sam than to you, though!
What you’re saying is correct if you’re assuming that so far zero resources have been spent on x-risk reduction and global poverty. (Though that isn’t quite right either: You can’t compute an output elasticity if you have to divide by 0.)
But you are supposed to compare the ideal output elasticity ratio with how resources are currently being spent; at the optimum, those two ratios are equal locally. So using your example, if more than 100,000 times as many resources were currently being spent on x-risk as on global poverty, global poverty should be prioritised.
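Spelling out the local condition I mean (using the same Cobb-Douglas assumption as in the sketch above, which is my own formalisation rather than anything from the post):

```latex
% Marginal value per unit of resources in each cause:
\[
  \frac{\partial V}{\partial x} = \frac{\alpha}{x}\,V,
  \qquad
  \frac{\partial V}{\partial y} = \frac{\beta}{y}\,V
\]
% These are equal (no local reallocation improves things) exactly when
% current spending matches the elasticity ratio:
\[
  \frac{x}{y} = \frac{\alpha}{\beta}
\]
% If x/y exceeds \alpha/\beta (more than 100,000:1 on x-risk in the example),
% the marginal unit does more good in poverty reduction, and vice versa.
```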
When I was running the numbers, my impression was that global wellbeing increases had a much bigger output elasticity than x-risk reduction. I found it a bit tricky to find numbers for global (not just EA) x-risk reduction efforts, so I’m not confident in that, nor in how large the gap in resource spending is. 80k quotes $500 billion per year for resources spent on global wellbeing increases.