It's not really a motte and bailey if the bailey and the motte are not different. I don't think most EA people believe in, or see themselves as engaged in, maximizing total expected utility according to a hedonistic utilitarian framework. Even assuming that were true, though, it's hard to see how the end result would necessarily differ markedly from "want to donate a lot and make sure their donations count," which is a conclusion perfectly consistent with any number of ethical frameworks, including the one used by 99% of us 99% of the time (utilitarians included): thinking-about-it-as-we-go-along-ism.
What you are not quite saying, but is implicit in your "utilitarian framework" point, is that there is an element within EA, mostly via a certain website, who see it as an explicit motte-and-bailey tool. Some of that site's users have been explicit in the past that they are using GiveWell and EA as a soft "recruiting tool": the belief is that once they have gotten people to sign up to "donate a lot and make sure their donations count," they can modify individuals' preferences about what "donations count" means (perhaps by presenting them with certain utilitarian arguments) and get them to switch from extreme poverty causes to their preferred esoteric AI and future-risk stuff.
But they are not "most EAs." In monetary and numerical terms they are small compared to the numbers groups like GW, TLYCS, and GWWC have. It's not even most future-risk or AGI people, most of whom wear their allegiances and weirdness points quite openly.