I just want to pick up quickly on something mentioned in footnote 6.
Iason writes “According to a recent survey of effective altruists 69 percent were consequentialists, 2% were deontologists, and 5% were virtue ethicists” (and 20% were “Other”), but it’s worth emphasising that we can’t take our survey to represent EA tout court, just our sample. That said, I think the claim in the main text that EA is “broadly consequentialist” in its “thick” mode is one few would disagree with.
I’m a little surprised by some of the other claims about what EAs are like, such as (quoting Singer): “they tend to view values like justice, freedom, equality, and knowledge not as good in themselves but good because of the positive effect they have on social welfare.”
It may be true, but if so I need to do some updating. My own take is that those things are all inherently valuable, but (leaving aside far-future and x-risk stuff) welfare is a better buy. I can’t necessarily assume many people in EA agree with me, though.
There’s also some confusion in the language between what people in EA do and what their representatives at GW (GiveWell) and GWWC (Giving What We Can) do. I’m thinking of:
“(Effective altruists) assess the scale of global problems by looking at major reports and publications that document their impact on global well-being, often using cost-effectiveness analysis.”
Interesting. My view is that EAs do tend to view these things as valuable only insofar as they serve wellbeing, at least in their explicit theorising and decision-making. That’s my personal view, anyway. I’d add the caveat that I think most people implicitly judge according to a more deontological folk morality when making moral judgements (i.e. we actually do think that fairness, and our beliefs being right, are important).
I think this varies a bit by cause area though. For example (and this is not necessarily a criticism) the animal rights (clue’s in the name) section seems much more deontological.
Some of those things I would just define in utilitarian terms. I would view justice as ‘the social arrangement that maximizes utility’, and the form of equality I value most highly is equal consideration of interests (of course, I value other forms of equality instrumentally).
As an animal rights EA involved in one of the more explicitly deontological organizations (DxE), I have to say there are more consequentialists than you’d think. I’m a consequentialist, for instance, but think rights and the like often have high instrumental value.