"The EA Infrastructure Fund will fund and support projects that build and empower the community of people trying to identify actions that do the greatest good from a scope-sensitive and impartial welfarist view."
I'm curious how EA Funds incorporates moral uncertainty into its decision making, given that its mandate is 100% welfarist. To be clear, I don't think running one project that is 100% welfarist necessarily contradicts plausible views on moral uncertainty. I think welfarism is massively underrepresented in most people's decision making, and to compensate for that one might run a 100% welfarist project despite having credence in multiple theories.
I know this is not within the scope of the EAIF, but I think this example from animal welfare illustrates the trade-off well. Some countries have passed legislation to ban the culling of male chicks in the egg industry, so male chicks won't be born in those countries. Working on these bans is a moral priority if you think acts of killing are intrinsically bad. If you think welfare is all that matters, then working on this issue is far lower in priority, since male chicks live for three days at most and their life experiences are dwarfed by those of other animals. Would EA Funds prefer people coming into EA to be 100% welfarist with respect to the projects they choose to work on?
I had similar conundrums when drafting a vision and mission for my organisation, i.e. how to keep our edge while being clear about taking moral uncertainty seriously. So I'm curious how EA Funds thinks about this issue.
I'm not too worried about this kind of moral uncertainty. I think moral uncertainty is mostly action-relevant when one moral view is particularly "grabby", or when the methodology you use to analyse an intervention seems to favour one view over another unfairly.
In both cases, I think the actual reason for concern is quite slippery and difficult for me to articulate well (which normally means that I don't understand it well). I tend to think that the best policy is to maximise the expected outcomes of the overall decision-making policy (which involves paying attention to decision theory, common sense morality, deontological constraints, etc.).
In any case, most of my moral uncertainty worry comes from maximising very hard on a narrow worldview (or set of metrics). But I think that "welfarism" is sufficiently broad, and the mandate and track record of the EAIF sufficiently varied, that I'm not particularly worried about this class of concerns.
Not to detract from the general point, but there are welfarist views that can accommodate chick culling being very bad, such as critical-level utilitarianism. I don't think they're very popular, though.