There was a bit of discussion on Twitter about this post. Rob Bensinger had a thread that included this comment:
Someone donating 20 minutes of their time to working in a soup kitchen could say they’re “promoting the general welfare”. You need something here about maximizing.
One (maybe slightly boring) option would be something like “soft welfare-maximisation”, where “soft” just means that it can be subjected to various constraints.
Another term for a related concept is Richard Ngo’s “scope-sensitive ethics” (or “scale-sensitive”, as Ben Todd suggests), which he takes to capture “the core intuition motivating utilitarianism”. However, that doesn’t include any explicit reference to welfare or maximisation.
Is there anything wrong just with ‘effective altruism’ as the name?
Well, that’s not what ‘effective altruism’ means, right? At least on some understandings of the term, EA is not even a normative view; it’s rather a project that people can engage in for a variety of reasons. E.g. “excited altruists” do not, as such, embrace “beneficentrism”. (Though I would personally agree that the latter is an excellent reason for becoming involved with EA.)