“My view is that—for the most part—people who identify as EAs tend to have unusually high integrity. But my guess is that this is more despite utilitarianism than because of it.”
This seems unlikely to me. I think utilitarianism broadly encourages pro-social/cooperative behaviors, especially because it encourages caring about collective success rather than individual success. A positive, trusting community helps achieve those outcomes, and under a universalist morality it’s harder for defection to make sense.
Broadly, I think worries that utilitarianism/consequentialism will lead to negative outcomes are often self-defeating, because the utilitarians/consequentialists can see the negative outcomes themselves. If you went around killing people for their organs, the consequences would obviously be negative; the same goes for going around lying or being an asshole to people all the time.
In practice, many utilitarians/consequentialists don’t see the negative outcomes themselves, or at least enough of them don’t that things go to shit pretty quickly. (Relatedly, see the Unilateralist’s Curse, the Epistemic Prisoner’s Dilemma, and pretty much the entire game theory literature on collective action problems...)
In addition, it’s important not just that you actually have high integrity but that people believe you do. And people will be rightly hesitant to believe you do if you go around saying that the morally correct thing to do is to maximize expected utility, but don’t worry, it’s always and everywhere true that the way to maximize expected utility is to act as if you have high integrity. There are two strategies available, then: actually have high integrity, which means not being 100% a utilitarian/consequentialist, or carry out an extremely convincing deception campaign to fool people into thinking you have high integrity. I recommend the former, and if you attempt the latter, fuck you.
Yeah, utilitarianism also isn’t always (or even most of the time, depending on the flavor) going to converge on “pro-social/cooperative behaviors”. I think this is because it’s easy to forget that while utilitarianism does broadly work towards the good of the community, it does so by aggregating individual utility, taking an individual’s experience to be the key building block of morality (as opposed to something like Communitarianism, which makes the good of the community, and the sort of behavior you mention, a more basic tenet of its practice). How far utilitarianism converges with these behaviors is certainly up for debate, but so long as those behaviors are valuable only insofar as they increase aggregate individual utility, there will be many places where the two diverge.

This is perhaps harder to see with a polar extreme like the one you mention, “lying or being an asshole to people all the time”, but I don’t think anyone is worried about that for utilitarianism. The worry is more that utilitarians might follow a successive path of deceit, or override other people’s interests, in pursuit of what they see as the greater good (e.g. “knowing” a friend would be better off if they didn’t have to bear the weight of some bad thing in the world that relates to them, which they wouldn’t find out about if you don’t tell them; this seems like the sort of thing utilitarianism might justify but maybe shouldn’t).