In the Facebook thread, David Moss brings up the question of why EAs are disproportionately consequentialist:
“This kinda begs the question of what consequentialism is good for and why it seems to have an affinity for EA. A couple of suggestions: consequentialism is great for i) mandating (currently) counter-intuitive approaches (like becoming really rich to help reduce poverty) and ii) being really demanding relative to (currently) standard levels of demandingness (i.e. give away money until it stops being useful; don’t give away just £5 a month if that doesn’t really detract from your happiness in any way). These benefits to consequentialism are overturned in cases where i) your desired moral outcome is not counter-intuitive (if people are already inclined to think you should never harm innocent creatures or should always be a good ally, then consequentialism just makes people shoulder a, potentially very difficult, burden of proof to show that their preferred action is actually helpful in this case), and ii) people are inclined to think something is a thing you should never do, as a rule, in which case consequentialism just makes people more open to potentially trading off and doing things they otherwise would never do, in the right circumstances.”
These two factors may partly explain why EAs are disproportionately consequentialist, but I’m not convinced they’re the main explanation. I don’t know what that explanation is, but I think other factors include:
a) consequentialism is a contrarian, counter-intuitive moral position, and EA can be too
b) consequentialism goes along with a quantitative mindset
c) many EAs were recruited through people’s social circles, and the seeds for these were often consequentialist or philosophical (studying philosophy being a risk factor for consequentialism)
I agree that the core EA tenets also make sense under most non-consequentialist views. But consequentialism might be better at activating people because it has very concrete implications. It seems to me that non-consequentialist positions are often vague when it comes to practical application, which makes it easy for adherents to not actually do much. In addition, adherence to consequentialism correlates with analytical thinking skills and mindware such as expected utility theory, which is central to understanding and internalizing the EA concept of cost-effectiveness. Finally, there’s a tension between agent-relative positions and cause neutrality, so consequentialism selects for people who are more likely to be on board with the latter.
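To make the expected-utility point concrete, here is a minimal sketch of the kind of calculation cost-effectiveness comparisons rest on; the intervention names and all figures below are invented purely for illustration:

```python
# Toy expected-value comparison of two hypothetical interventions.
# All names and figures are invented for illustration; real
# cost-effectiveness estimates involve many more inputs and far
# more uncertainty.

def expected_good_per_dollar(outcomes):
    """Expected units of good done per dollar, given (probability, units) pairs."""
    return sum(p * units for p, units in outcomes)

# Intervention A: reliably does a little good per dollar.
intervention_a = [(0.9, 1.0), (0.1, 0.0)]
# Intervention B: usually fails, occasionally does a lot of good per dollar.
intervention_b = [(0.05, 30.0), (0.95, 0.0)]

print(expected_good_per_dollar(intervention_a))  # 0.9
print(expected_good_per_dollar(intervention_b))  # 1.5 -> higher expected value
```

The point of the sketch is that expected-value reasoning can favor the less reliable option, which is exactly the kind of counter-intuitive conclusion the quoted comment says consequentialism is good at mandating.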
Like which ones?
Helping other people more rather than less and, consequently, the instrumental rationality of charitable giving?
This. Another core EA tenet might be that non-human animals count (if they are sentient).
Kantianism has positive duties, and Kant’s “realm of ends” sounds to me very much like taking into account “the instrumental rationality of charitable giving”. Kant himself didn’t grant babies or non-human animals intrinsic moral status, but some Kantian philosophers, most notably Korsgaard, have given good arguments as to why sentientism should follow from the categorical imperative.
Virtue ethics can be made to fit almost anything, so it seems easy to argue for the basic tenets of EA within that framework.
Some forms of contractualism do not include positive rights, and these forms would conflict with EA. But if you ground contractualism in reasoning from behind the veil of ignorance, as Rawls did, then EA principles, perhaps in more modest application (even though it is unclear to me why the veil-of-ignorance approach shouldn’t output utilitarianism), will definitely follow from the theory. Contractualism that puts weight on reciprocity would not take non-human animals into consideration, but there, too, you have contractualists arguing in favor of sentientism, e.g. Mark Rowlands.
I was mostly referring to the vast majority of people who are disposed, for natural and extra-rational reasons, to generally want to help people. I’m rather sceptical of subsuming the whole gamut of the history of moral philosophy into EA. I suppose such concerns might be incorporated into neo-Kantianism and virtue ethics, and increasingly they are right now; but then that’s a rather wide remit: one can do almost anything with a theoretical body if one does not care for the source material. The big change is ethical impartialism: until now, very few thought their moral obligations held equivalently across those inside and outside one’s society. Even the history of cosmopolitanism, namely in Stoic and late eighteenth-century German debates, refuses as much, grounding particularistic duties, pragmatically or otherwise, as much as ethical impartialism.
Kant, for example, wrote barely anything on distributive justice, leaving historians to piece together rather meager accounts, and absolutely nothing on international distributive justice (although he had an account of cosmopolitan right, namely a right to hospitality: a right to request interaction with others, who may decline except when declining would ensure one’s demise; this anticipates refugee rights, but nothing more). The most radical reading of Kant’s account of distributive justice (and many reputable thinkers have concluded him to be a proto-Nozick) is that a condition of the perpetuation of republican co-legislation, itself demanded by external freedom, is the perpetuation of its constituent citizenship, a premise which is obviously domestic. It seems that Kant did advocate a world state, at which point the justification would cross over to the global; prior to that, however, even on this most radical account, he appears to deny international distributive justice flatly.
As for Rawls, his global distributive minimalism is well known, but it probably does contingently justify altruism toward his so-called burdened societies. That the veil of ignorance (which is basically the sum of its parts, and is thus superfluous to the justification, being expressly a mere contrivance to make visible its conditions) yields the two principles of justice, and not utilitarianism, is rather fundamental to it: in such a situation, self-interested representative agents would not elect principles which might, given the contingent and thus unknown balance of welfare in a system, license their indigence, abuse, or execution. When the conditions of justice hold, namely an economic capacity to ensure relatively decent lives for a society, liberty is of foremost concern to persons conceived, as Rawls conceives them, as rational and reasonable.
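The decision-theoretic point here can be illustrated with a toy model; the social positions, welfare numbers, and “principles” below are invented stand-ins. A maximin chooser ranks distributions by their worst-off position, while an expected-utility chooser ranks them by their average, and the two can disagree about which principle to elect behind the veil:

```python
# Toy model of choice behind the veil of ignorance. Each "principle"
# is a hypothetical welfare distribution across social positions;
# all numbers are invented purely for illustration.

principles = {
    # Higher average welfare, but the worst-off position is very bad.
    "utilitarian-style": [1, 50, 60, 70],
    # Lower average welfare, but the floor is protected.
    "rawlsian-style": [30, 35, 40, 45],
}

def maximin(dist):
    # Rank a distribution by its worst-off position.
    return min(dist)

def expected_utility(dist):
    # Rank a distribution by its average position.
    return sum(dist) / len(dist)

print(max(principles, key=lambda p: maximin(principles[p])))           # rawlsian-style
print(max(principles, key=lambda p: expected_utility(principles[p])))  # utilitarian-style
```

On Rawls’s view, agents who cannot know which position they will occupy, and who refuse to gamble on principles that might license their own indigence, reason in the maximin style, which is why the veil is said to yield the two principles rather than utilitarianism.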
I suspect consequentialism and EA correlate heavily because of EA’s focus on helping others, rather than on making oneself or one’s own actions more “moral”. Focusing on helping others necessarily leads to caring about the consequences of one’s actions instead of caring about how those actions reflect on one’s moral character or how moral the actions are in themselves.
This “other-centeredness” is, at least, the reason why my own values are consequentialist.
“These two factors may partly explain why EAs are disproportionately consequentialist, but I’m not convinced they’re the main explanation. I don’t know what that explanation is...”
I would guess the simpler explanation is that (virtually all actually supported) forms of consequentialism imply EA, whereas other moral theories, if they imply anything relevant at all, tend to imply that it’s optional.
One exception to consequentialisms implying EA is, e.g., Randian Objectivism. And I doubt it’s a coincidence that the EA movement contains a very small number (I know of 0) of Randian Objectivists ;)