If effective altruists aren’t perfect utilitarians because they’re human, and humans can’t be perfect utilitarians because they’re human, maybe the problem is effective altruists trying to be perfect utilitarians despite their inability to do so, and that’s why they make mistakes. What do you think of that?
I don’t think this gets us very far. You’re making a utilitarian argument (or at least an argument consistent with utilitarianism) in favour of not trying to be a perfect utilitarian. Paradoxically, that is exactly what a perfect utilitarian would do given the information they have about their own limits: they’re human, as you put it. So for someone such as myself, who believes that utilitarianism is likely to be objectively true, this is nothing new. I already know not to be a perfectionist.
Ultimately, Singer put it best: do the most good that you can do.