“…focus on working towards a future that is good from many moral perspectives.”
I might disagree slightly here, or might frame things differently. I do take moral uncertainty, moral trade, cooperation, etc. quite seriously, and do think that those things push in favour of working towards a future that’s good from many moral perspectives. But I think we’d need more detailed analysis to say whether or not a given person, or EA as a whole, should focus on that goal.
It may even be that the best way to cooperate and maximise everyone’s values (in expectation) is to take a sort of portfolio approach across different value systems. That is, we might want many people to focus on working towards a future that’s excellent from a handful of moral perspectives and either OK or only slightly bad from other moral perspectives, but with these efforts collectively representing a huge range of moral perspectives. This might be better due to the gains from specialisation.
E.g., some people might focus primarily on extinction risk reduction and some might focus primarily on fail-safe AI. Perhaps this results in a halving of both sets of risks, and perhaps that seems better to both sets of values than everyone working on just one of those risks would. (I’m not saying this is the case; I see it merely as a plausible illustrative example.)
Note that this isn’t the same as “Just do what seems best to your own values”—it might be that a suffering-focused person works on extinction risk reduction while a non-suffering-focused person works on fail-safe AI, as a sort of moral trade. This arrangement could be best for both of their values if it suits their comparative advantages.
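To make the arithmetic behind that illustration concrete, here’s a minimal sketch in Python. Every number in it (the baseline risk levels, the “halving”, the diminishing-returns assumption, and the value weights) is a made-up assumption for illustration, not an estimate of any real risk. It just compares expected disvalue under two value systems for the split allocation versus everyone working on one risk.

```python
# Toy expected-disvalue comparison: a "portfolio" allocation vs. everyone
# working on a single risk. All numbers are illustrative assumptions.

# Portfolio: people split across both problems, halving each risk from a
# made-up baseline of 0.10 (as in the illustrative example above).
portfolio = {"extinction": 0.05, "ai_failure": 0.05}

# Concentrated: everyone works on extinction risk. Assuming diminishing
# returns, doubling the workforce only buys a bit more reduction.
concentrated = {"extinction": 0.04, "ai_failure": 0.10}

# How much each value system disvalues each bad outcome (made-up weights).
value_systems = {
    "suffering-focused": {"extinction": 1.0, "ai_failure": 3.0},
    "upside-focused":    {"extinction": 3.0, "ai_failure": 1.0},
}

def expected_disvalue(risks, weights):
    """Expected badness: sum over outcomes of P(outcome) * disvalue(outcome)."""
    return sum(p * weights[outcome] for outcome, p in risks.items())

for name, weights in value_systems.items():
    print(f"{name}: portfolio={expected_disvalue(portfolio, weights):.2f}, "
          f"concentrated={expected_disvalue(concentrated, weights):.2f}")

# With these numbers, both value systems prefer the portfolio (0.20 < 0.34
# for the suffering-focused weights, 0.20 < 0.22 for the upside-focused
# ones), even though each ranks the two problems differently. Who works on
# which problem within the portfolio can then be assigned by comparative
# advantage, which is the "moral trade" point above.
```

Whether the portfolio comes out ahead obviously depends entirely on the assumed numbers; the point is only that there exist plausible-looking numbers under which it does.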
Did you mean “focus on working towards a future that is good from many moral perspectives” to be inclusive of taking that sort of portfolio approach, in which individual people might still focus on doing things that are primarily good based on one (set of) moral perspectives?
Yeah, I meant it to be inclusive of this “portfolio approach”. I agree that specialisation and comparative advantages (and perhaps also sheer motivation) can justify focusing on things that are primarily good based on one (set of) moral perspectives.
In that case, take my comment above as just long-winded agreement!
I think we could probably consider motivation (and thus “fit with one’s values”) as one component of comparative advantage, because it will tend to make a person better at something, more likely to work hard at it, less likely to burn out, etc. That said, motivation could sometimes be outweighed by other components of comparative advantage (e.g., a person’s current skills, credentials, and networks).