Upvoted, though I was struck by this part of the appendix:
While I totally agree with the conclusion of the post (the community should have a portfolio of causes, and not invest everything in the top cause), I feel very unsure that a lot of these reasons are good ones for spreading out from the most promising cause.
Or if they do imply spreading out, they don’t obviously justify the standard EA alternatives to AI Risk.
Throughout the post I noticed I kept disagreeing with your reasons for not doing argmax, and this list helped me see why.
1. Starting with VOI: this assumes you can get significant information about how good a cause is by having people work on it. In practice, a ton of the uncertainty is about scale and neglectedness, and having people work on a cause doesn’t tell you much about those. Global priorities research usually seems more useful for reducing that uncertainty.
VOI would also imply working on causes that might be top, but that we’re very uncertain about. So, for example, it probably wouldn’t imply that people interested in longtermism should work on global health or factory farming, but rather that they should spread out over lots of weirder small causes, like those listed here: https://80000hours.org/problem-profiles/#less-developed-areas
2. “You don’t know the whole option set” sounds like a similar issue to VOI. It would imply trying to go and explore totally new areas, rather than working on familiar EA priorities.
3. Many approaches to moral uncertainty suggest that you factor in uncertainty in your choice of values, but then you just choose the best option with respect to those values. It doesn’t obviously suggest supporting multiple causes.
4. Concave altruism. Personally I think there are increasing returns on the level of orgs, but I don’t think there are significant increasing returns at the level of cause areas. (And that post is more about exploring the implications of concave altruism rather than making the case it actually applies to EA cause selection.)
5. Optimizer’s curse. This seems like a reason to think your best guess isn’t as good as you think, rather than to support multiple causes.
6. Worldview diversification. This isn’t really an independent reason to spread out – it’s just the name of Open Phil’s approach to spreading out (which they believe for other reasons).
7. Risk aversion. I don’t think we should be risk averse about utility, so agree with your low ranking of it.
8. Strategic skullduggery. This actually seems like one of the clearest reasons to spread out.
9. Decreased variance. I agree with you this is probably not a big factor.
You didn’t add diminishing returns to your list, though I think you’d rank it near the top. I agree it’s a factor, but I also think it’s often oversold. E.g. if there are short-term bottlenecks in AI that create diminishing returns, the best response is probably to invest in career capital and wait for the bottlenecks to disappear, rather than to switch into a totally different cause. You also need big increases in resources to get enough diminishing returns to change the cause ranking: e.g. if you think AI safety is 10x as effective as biosecurity at the margin, the AI safety community might need to grow roughly 10x in size relative to the biosecurity community before the two would equalise (see the sketch below).
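To spell out the arithmetic behind that last claim (a minimal sketch, assuming logarithmic returns to resources within each cause, which is just an illustrative assumption rather than anything argued for in the post): if $U_i(x_i) = k_i \ln x_i$, the marginal value of cause $i$ is $U_i'(x_i) = k_i / x_i$. AI safety ($A$) being 10x biosecurity ($B$) at the margin means $k_A / x_A = 10\, k_B / x_B$. Holding $x_B$ fixed, the margins only equalise once $k_A / x_A' = k_B / x_B$, i.e. $x_A' = 10\, x_A$: AI safety’s resources would have to grow roughly tenfold relative to biosecurity’s.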
I tried to summarise what I think the good reasons for spreading out are here.
For a longtermist, I think those considerations would suggest a picture like:
50% into the top 1-3 issues
20% into the next couple of issues
20% into exploring a wide range of issues that might be top
10% into other popular issues
If I had to list a single biggest driver, it would be personal fit / idiosyncratic opportunities, which can easily produce orders of magnitude differences in what different people should focus on.
The question of how to factor in neartermism (or other alternatives to AI-focused longtermism) seems harder. It could easily imply still betting everything on AI, though putting some % of resources into neartermism in proportion to your credence in it also seems sensible.
Some more here about how worldview diversification can imply a wide range of allocations depending on how you apply it: https://twitter.com/ben_j_todd/status/1528409711170699264
3. Tarsney suggests one other plausible reason moral uncertainty is relevant: nonunique solutions leaving some choices undetermined. But I’m not clear on this.
Excellent comment, thanks!
Yes, I wasn’t trying to endorse all of those (and should have put numbers on their dodginess).
1. Interesting. I disagree for now, but would love to see what persuaded you of this. Fully agree that softmax implies long shots (see the sketch after this list).
2. Yes, new causes and also new interventions within causes.
3. Yes, I really should have expanded this, but was lazy / didn’t want to disturb the pleasant brevity. It’s only “moral” uncertainty about how much risk aversion you should have that changes anything. (à la this.)
4. Agree.
5. Agree.
6. I’m using (possibly misusing) WD to mean something more specific, like: “given cause A, what is best to do? What about under cause B? What about under discount x?...”
7. Now I’m confused about whether points 3 and 7 are the same.
8. Yeah it’s effective in the short run, but I would guess that the loss of integrity hurts us in the long run.
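On point 1, a minimal sketch of why softmax implies long shots (the notation is my own illustration, not anything from the post): with estimated cause values $v_i$ and temperature $T$, a softmax allocation puts weight $w_i = e^{v_i/T} / \sum_j e^{v_j/T}$ on cause $i$. Every $w_i$ is strictly positive, so even low-$v_i$ long shots get some resources, with $T$ controlling how concentrated the portfolio is; argmax is the $T \to 0$ limit, where everything goes to the single top cause.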
Will edit in your suggestions, thanks again.