we, in the modern era, may also be unknowingly guilty of …
Maybe substitute “responsible” for “guilty”?
Utilitarianism cares not only about the wellbeing of humans, but also about the wellbeing of non-human animals. Consequently, utilitarianism rejects speciesism, a form of discrimination against those who do not belong to a certain species.
There is a part of me which dislikes you presenting utilitarianism which includes animals as the standard form of utilitarianism. I think that utilitarianism + non-speciesism falls under the “right but not trivial” category, and that a lot of legwork has to be done before you can get people to accept it, and further that this legwork must be done, instead of sliding over the inferential distance. Because of this, I’d prefer you to disambiguate between versions of utilitarianism which aggregate over humans, and those which aggregate over all sentient/conscious beings, and maybe point out how this developed over time (i.e., Peter Singer had to come and make the argument forcefully, because before it was not obvious)? For example, the Wikipedia entry on utilitarianism has a whole section on “Humans alone, or other sentient beings?”.
Similarly, maybe you would also want to disambiguate a little bit more between effective altruism and utilitarianism, and explicitly mention it when you’re linking it to effective altruism websites, or use effective altruism examples?
Also, what’s up with attributing the veil of ignorance to Harsanyi but not mentioning Rawls?
The section on Multi-level Utilitarianism Versus Single-level Utilitarianism seems exceedingly strange. In particular, you can totally use utilitarianism as a decision procedure (and if you don’t, what’s the point?). The fact that you don’t have the processing power of a supercomputer and perfect information doesn’t mean that you can’t approximate it as best you can.
For example, if I buy eggs which come from less shitty farmers, or if I decide to not buy eggs in order to reduce factory farming, I’m using utilitarianism as a decision procedure. Even though I can’t discern the exact effects of the action, I can discern that the action has positive expected value.
I don’t fall into recursive loops trying to compute how much compute I should use to compute the expected value of an action because I’m not an easily disabled robot in a film. But I do sometimes go up several levels of recursion, depending on the importance of the decision. I use heuristics like I use low degree Taylor polynomials.
(I also don’t always instantiate utilitarianism. But when I do, I do use it as a decision procedure)
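One way to picture this “heuristics for small decisions, explicit estimates for big ones” idea is the toy Python sketch below. It is only an illustration of bounded expected-value reasoning, not anything from the discussion itself: every option name, heuristic score, and probability is invented.

```python
# Toy sketch of bounded expected-value reasoning: a cheap heuristic for
# unimportant choices, an explicit (but still approximate) expected-value
# estimate for important ones. All names and numbers are invented.

def expected_value(outcomes):
    """Expected value of a list of (probability, value) pairs."""
    return sum(p * v for p, v in outcomes)

def choose(options, importance, threshold=0.5):
    """Use a rough heuristic score for low-stakes decisions and an explicit
    expected-value estimate for high-stakes ones."""
    if importance < threshold:
        return max(options, key=lambda o: o["heuristic_score"])
    return max(options, key=lambda o: expected_value(o["outcomes"]))

options = [
    {"name": "buy caged eggs", "heuristic_score": 0.0,
     "outcomes": [(1.0, -3.0)]},                 # certain, larger harm
    {"name": "buy higher-welfare eggs", "heuristic_score": 0.5,
     "outcomes": [(0.8, -1.0), (0.2, -2.0)]},    # probably less harm
    {"name": "buy no eggs", "heuristic_score": 0.7,
     "outcomes": [(0.6, 0.0), (0.4, -0.5)]},     # small chance of downside
]

print(choose(options, importance=0.9)["name"])   # high stakes: explicit estimate
```

Here the `threshold` stands in for “how many levels of recursion the decision is worth”: the sketch never tries to optimise how much to compute, it just switches between a cheap rule and a costlier one.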
In contrast, to our knowledge no one has ever defended single-level utilitarianism [i.e., that utilitarianism should be a decision procedure]
You know what, I defend single-level utilitarianism as a straightforward application of utilitarianism + bounded computing power / bounded rationality, and have the strong intuition that if utilitarianism isn’t a decision rule, then there’s no point to it. Fight me. (But also, feel free not to if you calculate that you have better things to do).
A common objection to multi-level utilitarianism is that it is self-effacing. A theory is said to be (partially) self-effacing if it (sometimes) directs its adherents to follow a different theory. Multi-level utilitarianism often forbids using the utilitarian criterion when we make decisions, instead recommending to act in accordance with non-utilitarian heuristics. However, there is nothing inconsistent about saying that your criterion of moral rightness comes apart from the decision procedure it recommends, and it does not mean that the theory fails.
I have different intuitions which strongly go in the other direction.
Thank you for your comment!

There is a part of me which dislikes you presenting utilitarianism which includes animals as the standard form of utilitarianism. (...) I’d prefer you to disambiguate between versions of utilitarianism which aggregate over humans, and those which aggregate over all sentient/conscious beings, and maybe point out how this developed over time (i.e., Peter Singer had to come and make the argument forcefully, because before it was not obvious)?
My impression is that the major utilitarian academics were rather united in extending equal moral consideration to non-human animals (in line with technicalities’ comment). I’m not aware of any influential attempts to promote a version of utilitarianism that explicitly does not include the wellbeing of non-human animals (though, for example, a preference utilitarian may give different weight to some non-human animals than a hedonistic utilitarian would). In the future, I hope we’ll be able to add more content to the website on the link between utilitarianism and anti-speciesism, with the intention of bridging the inferential distance to which you rightly point.
Similarly, maybe you would also want to disambiguate a little bit more between effective altruism and utilitarianism, and explicitly mention it when you’re linking it to effective altruism websites, or use effective altruism examples?
In the section on effective altruism on the website, we already explicitly disambiguate between EA and utilitarianism. I don’t currently see the need to e.g. add a disclaimer when we link to GiveWell’s website on Utilitarianism.net, but we do include disclaimers when we link to one of the organisations co-founded by Will (e.g. “Note that Professor William MacAskill, coauthor of this website, is a cofounder of 80,000 Hours.”)
Also, what’s up with attributing the veil of ignorance to Harsanyi but not mentioning Rawls?
We hope to produce a longer article on how the Veil of Ignorance argument relates to utilitarianism at some point. We currently include a footnote on the website, saying that “This [Veil of Ignorance] argument was originally proposed by Harsanyi, though nowadays it is more often associated with John Rawls, who arrived at a different conclusion.” For what it’s worth, Harsanyi’s version of the argument seems more plausible than Rawls’ version. Will commented on this matter in his first appearance on the 80,000 Hours Podcast, saying that “I do think he [Rawls] was mistaken. I think that Rawls’s Veil of Ignorance argument is the biggest own goal in the history of moral philosophy. I also think it’s a bit of a travesty that people think that Rawls came up with this argument. In fact, he acknowledged that he took it from Harsanyi and changed it a little bit.”
The section on Multi-level Utilitarianism Versus Single-level Utilitarianism seems exceedingly strange. In particular, you can totally use utilitarianism as a decision procedure (and if you don’t, what’s the point?).
Historically, one of the major criticisms of utilitarianism was that it supposedly required us to calculate the expected consequences of our actions all the time, which would indeed be impractical. However, this is not true, since it conflates using utilitarianism as a decision procedure and as a criterion of rightness. The section on multi-level utilitarianism aims to clarify this point. Of course, multi-level utilitarianism does still permit attempting to calculate the expected consequences of one’s actions in certain situations, but it makes it clear that doing so all the time is not necessary.

For more information on this topic, I recommend Amanda Askell’s EA Forum post “Act utilitarianism: criterion of rightness vs. decision procedure”.

Harsanyi’s version also came first IIRC, and Rawls read it before he wrote his version. (Edit: Oh yeah you already said this)
To my knowledge, most of the big names (Bentham, Sidgwick, Mill, Hare, Parfit) were anti-speciesist to some degree; the unusual contribution of Singer is the insistence on equal consideration for nonhumans. It was just not obvious to their audiences for 100+ years afterward.
My understanding of multi-level U is that it permits not using explicit utility estimation, rather than forbidding using it. (U as not the only decision procedure, often too expensive.) It makes sense to read (naive, ideal) single-level consequentialism as the converse, forbidding or discouraging not using U estimation. Is this a straw man? Possibly, I’m not sure I’ve ever read anything by a strict estimate-everything single-level person.
I think using expected values is just one possible decision procedure, one that doesn’t actually follow from utilitarianism and isn’t the same thing as using utilitarianism as a decision procedure. To use utilitarianism as a decision procedure, you’d need to know the actual consequences of your actions, not just a distribution or the expected consequences.
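To make that distinction concrete, here is a small illustration in Python (all numbers invented): maximising expected value only needs a probability distribution over outcomes, whereas judging the act by its actual consequences needs the realised outcome, which isn’t available at decision time.

```python
# Toy illustration (all numbers invented) of expected value vs. actual
# consequences: the expected-value rule only needs a distribution over
# outcomes; the actual consequences are a single realised outcome.
import random

actions = {
    "A": [(0.9, 10.0), (0.1, -40.0)],   # usually good, small chance of disaster
    "B": [(1.0, 4.0)],                  # modest but certain benefit
}

def expected_value(dist):
    return sum(p * v for p, v in dist)

def realised_value(dist):
    values = [v for _, v in dist]
    probs = [p for p, _ in dist]
    return random.choices(values, weights=probs)[0]

best = max(actions, key=lambda a: expected_value(actions[a]))
print("chosen by expected value:", best)                        # "A" (EV 5.0 vs 4.0)
print("actual outcome this time:", realised_value(actions[best]))
# Roughly 1 time in 10 the realised outcome of "A" is -40.0, far worse than
# the certain 4.0 from "B": the expected-value procedure can recommend an act
# whose actual consequences turn out worse.
```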
Classical utilitarianism, as developed by Bentham, was anti-speciesist, although some precursors and some theories that followed may not have been. Bentham already made the argument to include nonhuman animals in the first major work on utilitarianism:

Other animals, which, on account of their interests having been neglected by the insensibility of the ancient jurists, stand degraded into the class of things. … The day has been, I grieve to say in many places it is not yet past, in which the greater part of the species, under the denomination of slaves, have been treated … upon the same footing as … animals are still. The day may come, when the rest of the animal creation may acquire those rights which never could have been withholden from them but by the hand of tyranny. The French have already discovered that the blackness of skin is no reason why a human being should be abandoned without redress to the caprice of a tormentor. It may come one day to be recognized, that the number of legs, the villosity of the skin, or the termination of the os sacrum, are reasons equally insufficient for abandoning a sensitive being to the same fate. What else is it that should trace the insuperable line? Is it the faculty of reason, or perhaps, the faculty of discourse?...the question is not, Can they reason? nor, Can they talk? but, Can they suffer? Why should the law refuse its protection to any sensitive being?… The time will come when humanity will extend its mantle over everything which breathes…
Mill distinguished between higher and lower pleasures to avoid the charge that utilitarianism is “philosophy for swine”, but still wrote, from that Wiki page section you cite:
Granted that any practice causes more pain to animals than it gives pleasure to man; is that practice moral or immoral? And if, exactly in proportion as human beings raise their heads out of the slough of selfishness, they do not with one voice answer ‘immoral’, let the morality of the principle of utility be for ever condemned.
The section also doesn’t actually mention any theories for “Humans alone”.
I’d also say that utilitarianism is often grounded with a theory of utility, in such a way that anything capable of having utility in that way counts. So, there’s no legwork to do; it just follows immediately that animals count as long as they’re capable of having that kind of utility. By default, utilitarianism is “non-speciesist”, although the theory of utility and utilitarianism might apply differently roughly according to species, e.g. if only higher pleasures or rational preferences matter, and if nonhuman animals can’t have these, this isn’t “speciesist”.