Hi, thanks for writing this. As others have pointed out, I am a bit confused about how the conclusion (more diversification in EA careers, etc.) follows from the assumption (high uncertainty about cause prioritisation).
You might think that we should be risk averse with respect to our difference-making, i.e. that we should want the EA community to do some good in many worlds. See here for a summary post from me collecting the arguments against the “risk-averse difference-making” view. One might still justify increased diversification for instrumental reasons (e.g. the welcomingness of the community), but I don’t think that’s what you explicitly argue for.
You might instead think that updating towards greater uncertainty means we are more likely to change our minds about causes in the future. If we change our minds about priorities in, say, 2 or 10 years, it would be really advantageous if some members of the community had already been working in the relevant cause area. Hence, we should spread out.
However, I don’t think that this argument works. First, more uncertainty now might also mean more uncertainty later, so it is unclear that I should update towards thinking we are more likely to change our minds.
Second, if you think that we can resolve that uncertainty and update in the future, then that is a reason for people to work as cause prioritisation researchers, not a reason to spread out across more cause areas.