Please consider this for the contest on criticism. (Or if this isn’t the way to enter, please let me know.)
First, in the spirit of friendly critique: the organization would surely serve its goals better by stating them consistently, for its name and the mission statement that immediately follows it are in tension. Altruism means leaving yourself out of your decision-making and focusing on the good of others. Doing the most possible good (assuming one is oneself affected by one's actions) requires including oneself in the calculus. If each is to count for one, per Bentham, then I should count no less than others. For the sake of what follows I will assume that the second goal, doing the most good, is the organization's priority rather than the first.
I share (but won't defend here) a general commitment to the utilitarian goal of maximizing aggregate utility, defined as aggregate happiness net of aggregate misery. This leaves many questions unanswered, of course, including whether the utility in question includes that of sentient non-humans. It also leaves unaddressed the paradox put forward by Parfit, the Repugnant Conclusion: whether maximizing aggregate utility is desirable when it is achieved by greatly multiplying the population while greatly lowering average utility, so that the loss in average does not quite offset the gain in aggregate.
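To make the structure of that paradox concrete, write aggregate utility as population times average utility, U = N × average. With purely illustrative numbers of my own (not Parfit's): 1,000 people at an average utility of 100 gives U = 1,000 × 100 = 100,000, while 10,000,000 people at an average utility of 0.0101 gives U = 10,000,000 × 0.0101 = 101,000. The aggregate rises slightly even though the average collapses, and that is the trade the paradox asks us to accept.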
The starting point of my critique is the idea that what has to count for a consistent utilitarian with a platform is not the utility of a candidate action or pattern of action. What has to count, rather, is the utility of the praise, inculcation, and promotion of that action or pattern. These two things are different. (The distinction seems to have been first proposed by Mill in his Autobiography.)
To illustrate the point: what if the way to make the most difference in practice doesn't involve advocating making the most difference? How might this be the case? It might be the case if the message were likely to fall on deaf ears, a possibility raised by the Parable of the Sower in the Gospel according to St. Luke. It might be the case if the message were not just ignored but counterproductive, perhaps because those advocating it unintentionally made themselves look like other-worldly idealists to their fellows, so that people leaned the opposite way just to express their resentment of the advocates. Likeliest of all, perhaps: what if the standard were set so high that even people who took the message seriously in theory fell so far short in practice that more achievable ideals, which would have done more good had they been achieved, were never considered?
A nuance here is that the practice of preaching “altruism” rather than doing the most good may be warranted by the very distinction I am citing here between the utility of the act and the utility of the praise.
What I mean is this. Altruism may have evolved within New Testament ethics as a way of leveraging the (supposed) fact that people fall short of the ideals they are taught, in order to promote the goal of utility maximization. If I am told to put “the other guy” first, but my natural tendency is to backslide away from my ideals in the direction of over-counting my own interests, the net effect may be that I end up counting my own interests and the other guy's roughly equally, which would tend toward utility-maximizing outcomes. (I sketch this compensation logic with toy numbers below.) This logic would lead us to expect that deeply utilitarian people who held a more optimistic view of human nature would be less eager to praise altruism, since they would not be as anxious to compensate for a supposed tendency to backslide toward selfishness. In that connection it is interesting to note that the more legalistic, less “turn the other cheek” ethics of the Old Testament goes together with the absence from the Old Testament of the doctrine of Original Sin.
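Here is the promised toy sketch of the compensation logic; the weights are my own illustration, not drawn from any of the texts mentioned. Suppose the preached ideal assigns weight 0 to one's own interests and weight 1 to each other person's, and suppose backsliding adds a selfish bias of roughly one unit to the weight on oneself. The realized weights are then about w_self = 0 + 1 = 1 and w_other = 1, which lands near Bentham's “each to count for one” rather than at either pure self-abnegation or pure selfishness.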
To end on a constructive note, perhaps the EA group should divert some of its resources into investigating what works, not in the sense of which activities by its members and followers will maximize utility, but in the sense of which kinds of education and organizing on its own part will tend to maximize utility. This leaves open the possibility that continuing to advocate doing the most good (or even retaining the name EA) is not the way to generate the most good.