Launching Utilitarianism.net: An Introductory Online Textbook on Utilitarianism
We are excited to announce the launch of Utilitarianism.net, an introductory online textbook on utilitarianism, co-created by William MacAskill, James Aung and me over the past year.
https://www.utilitarianism.net/
The website aims to provide a concise, accessible and engaging introduction to modern utilitarianism, functioning as an online textbook targeted at the undergraduate level. We hope that over time this will become the main educational resource for students and anyone else who wants to learn about utilitarianism online. The content of the website aims to be understandable to a broad audience, avoiding philosophical jargon where possible and providing definitions where necessary.
Please note that the website is still in beta. We plan to produce an improved and more comprehensive version of this website by September 2020. We would love to hear your feedback and suggestions on what we could change about the website or add to it.
The website currently has articles on a range of topics, and we aim to add further content in the future.
We are particularly grateful for the help of the following people with reviewing, writing, editing or otherwise supporting the creation of Utilitarianism.net: Lucy Hampton, Stefan Schubert, Pablo Stafforini, Laura Pomarius, John Halstead, Tom Adamczewski, Jonas Vollmer, Aron Vallinder, Ben Pace, Alex Holness-Tofts, Huw Thomas, Aidan Goth, Chi Nguyen, Eli Nathan, Nadia Mir-Montazeri and Ivy Mazzola.
The following is a partial reproduction of the Introduction to Utilitarianism article from Utilitarianism.net. Please note that it does not include the footnotes, further resources, and the sections on Arguments in Favor of Utilitarianism and Objections to Utilitarianism. If you are interested in the full version of the article, please read it on the website.
Introduction to Utilitarianism
“The utilitarian doctrine is, that happiness is desirable, and the only thing desirable, as an end; all other things being only desirable as means to that end.”
- John Stuart Mill
Utilitarianism was developed to answer the question of which actions are right and wrong, and why. Its core idea is that we ought to act to improve the wellbeing of everyone by as much as possible. Compared to other ethical theories, it is unusually demanding and may require us to make substantial changes to how we lead our lives. Perhaps more so than any other ethical theory, it has provoked fierce philosophical debate between its proponents and critics.
Why Do We Need Moral Theories?
When we make moral judgments in everyday life, we often rely on our intuition. If you ask yourself whether or not it is wrong to eat meat, or to lie to a friend, or to buy sweatshop goods, you probably have a strong gut moral view on the topic. But there are problems with relying merely on our moral intuition.
Historically, people held beliefs we now consider morally horrific. In Western societies, it was once firmly believed to be intuitively obvious that people of color and women have fewer rights than white men; that homosexuality is wrong; and that it was permissible to own slaves. We now see these moral intuitions as badly misguided. This historical track record gives us reason to be concerned that we, in the modern era, may also be unknowingly guilty of serious, large-scale wrongdoing. It would be a very lucky coincidence if the present generation were the first generation whose intuitions were perfectly morally correct.
Also, people have conflicting moral intuitions about what things are right and wrong. So, we need a way to resolve these disagreements. The project of moral philosophy is to reflect on our competing moral intuitions and develop a theory that will tell us which actions are right or wrong, and why. This will then allow us to identify which moral judgments of today are misguided, enabling us to make moral progress and act more ethically.
One of the most prominent and influential attempts to create such a theory is utilitarianism. Utilitarianism was developed by the philosophers Jeremy Bentham and John Stuart Mill, who drew on ideas going back to the ancient Greeks. Their utilitarian views have been widely discussed since and have had a significant influence in economics and public policy.
Explaining What Utilitarianism Is
The core idea of utilitarianism is that we ought to act to improve the wellbeing of everyone by as much as possible.
A more precise definition of utilitarianism is as follows:
Utilitarianism is the family of ethical theories on which the rightness of actions (or rules, policies, etc.) depends on, and only on, the sum total of wellbeing they produce.
Sometimes philosophers talk about “welfare” or “utility” rather than “wellbeing”, but these words are typically used to mean the same thing. Utilitarianism is most commonly applied to evaluate the rightness of actions, but the theory can also evaluate other things, like rules, policies, motives, virtues, and social institutions. It is perhaps unfortunate that the clinical-sounding term “utilitarianism” caught on as a name, especially since in common speech the word “utilitarian” is easily confused with joyless functionality or even outright selfishness.
All ethical theories belonging to the utilitarian family share four defining characteristics: (i) consequentialism, (ii) welfarism, (iii) impartiality, and (iv) additive aggregationism.
Consequentialism is the view that the moral rightness of actions (or rules, policies, etc.) depends on, and only on, the value of their consequences.
Welfarism is the view that only the welfare (also called wellbeing) of individuals determines how good a particular state of the world is.
Impartiality is the view that the identity of individuals is irrelevant to the value of an outcome.
Additive Aggregationism is the view that the value of the world is given by the sum of the values of its parts, where these parts are some kind of local phenomena such as experiences, lives, or societies.
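The additive structure admits a compact formal statement. As a minimal sketch (the notation below is ours, not from the article): if an outcome O consists of parts p1, …, pn, where the parts are local phenomena such as experiences, lives, or societies, additive aggregationism says that

```latex
V(O) = \sum_{i=1}^{n} v(p_i)
```

where V(O) is the overall value of the outcome and v(p_i) is the value of each part considered on its own. Rival non-additive views might instead evaluate outcomes using, say, the average or the minimum of the v(p_i).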
The major rivals to utilitarianism are theories that deny one or more of the above four core principles. For example, their proponents might hold that actions can be inherently right or wrong regardless of their consequences, that some outcomes are good even if they do not increase the welfare of any individual, or that morality allows us to be partial towards our friends and families.
We cover the core principles of utilitarianism and its variants in greater depth in a separate article.
Classical Utilitarianism
The early utilitarians—Jeremy Bentham, John Stuart Mill, and Henry Sidgwick—were classical utilitarians. Classical utilitarianism is distinctive from other utilitarian theories because it accepts these two additional principles: First, it accepts hedonism as a theory of welfare. Hedonism is the view that wellbeing consists of positive and negative conscious experiences. For readability, we will call positive conscious experiences happiness and negative conscious experiences suffering. Second, classical utilitarianism accepts totalism as a theory of population ethics. Totalism is the view that one outcome is better than another if and only if it contains a greater sum total of wellbeing, where wellbeing can be increased either by making people better off or increasing the number of people with good lives.
Classical utilitarianism can be defined as follows:
Classical utilitarianism is the ethical theory on which the rightness of actions (or rules, policies, etc.) depends on, and only on, the sum total of happiness over suffering they produce.
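Putting hedonism and totalism together, classical utilitarianism also has a simple formal sketch (again, the notation is ours): writing h_i and s_i for the total happiness and suffering in the life of individual i, and summing over every individual who ever exists in an outcome O,

```latex
V(O) = \sum_{i \in O} \left( h_i - s_i \right)
```

Totalism shows up in the fact that this sum can be increased either by raising h_i − s_i for existing individuals or by adding further individuals whose happiness exceeds their suffering.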
Utilitarianism and Practical Ethics
Utilitarianism is a demanding ethical theory that may require us to substantially change how we act. Utilitarianism says that we should make helping others a very significant part of our lives. In helping others, we should try to use our resources to do the most good, impartially considered, that we can.
According to utilitarianism, we should extend our moral concern to all sentient beings, meaning every individual capable of experiencing positive or negative conscious states. On this basis, a priority for utilitarians may be to help society to continue to widen its moral circle of concern. For instance, we may want to persuade people that they should help not just those in their own country, but also those on the other side of the world; not just those of their own species, but all sentient creatures; and not just people currently alive, but any people whose lives they can affect.
Despite taking a radically different approach to ethics from commonsense morality, utilitarianism generally endorses commonsense prohibitions. For practical purposes, the best course of action for a utilitarian is to try to do as much good as possible whilst still acting in accordance with commonsense moral virtues, like integrity, trustworthiness, law-abidingness, and fairness.
We discuss the implications of utilitarianism for practical ethics in a separate article.
Acting on Utilitarianism
There are many problems in the world today, some of which are extremely large in scale. Unfortunately, our resources are scarce, so as individuals and even as a global society we cannot solve all the world’s problems at once. This means we must decide how to prioritize the resources we have. Not all ways of helping others are equally effective. By the lights of utilitarianism, we should choose carefully which moral problems to work on and by what means, based on where we can do the most good. This involves taking seriously the question of how we can best use our time and money to help others. Once again, utilitarianism urges us to consider the wellbeing of all individuals regardless of what species they belong to, what country they live in, and at what point in time they exist. With this in mind, a few moral problems appear especially pressing:
Global Health and Development. Those in affluent countries are typically one hundred times richer than the poorest seven hundred million people in the world. Also, we can radically improve the lives of the extreme poor, such as by providing basic medical care, at very little cost.
Factory Farming. Tens of billions of non-human animals are kept in horrific conditions in factory farms, undergoing immense unnecessary suffering. We could radically decrease this suffering at very little cost to society.
Existential Risks. There will be vast numbers of people in the future, and their lives could be very good. Yet technological progress brings risks, such as from climate change, nuclear war, synthetic biology and artificial intelligence, that could endanger humanity’s future. But if we can successfully navigate these risks, we can ensure a flourishing world for trillions of people yet to come.
There are three key means of helping those affected by the above moral concerns: donating money to effective charities, working in an impactful career, and convincing other people to do the same. For example, donations to the most effective global health charities are expected to save a human life for as little as $2,300; this money may go even further when donated to address factory farming or existential risks. Choosing which career to pursue may be more important still, since some careers allow us to do far more good than others.
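To make the donation figure concrete, here is a rough back-of-the-envelope illustration; the salary and the 10% donation rate are assumptions invented for this example, with only the $2,300 cost-per-life figure taken from the text above:

```latex
\frac{0.10 \times \$60{,}000 \text{ per year}}{\$2{,}300 \text{ per life}} \approx 2.6 \text{ lives per year}
```

On those assumptions, a typical professional donating a tenth of their income could expect to save two to three lives every year.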
In a separate article, we discuss what utilitarianism means for how we should act.
[The full version of this article includes two additional sections here, one on Arguments in Favor of Utilitarianism, the other on Objections to Utilitarianism.]
Conclusion
What matters most for utilitarianism is bringing about the best consequences for the world. This involves improving the wellbeing of all individuals, regardless of their gender, race, species, and their geographical or temporal location. Against this background, three key concerns for utilitarianism are helping the global poor, improving farmed animal welfare, and ensuring that the future goes well over the long term. Since utilitarianism is unusually demanding, it may require us to make benefiting others the main focus of our lives.
All utilitarian theories share the four core principles of consequentialism, welfarism, impartiality, and additive aggregationism. The original and most influential version of utilitarianism is classical utilitarianism, which encompasses two further characteristics: hedonism and totalism. Hedonism is the view that wellbeing consists entirely of conscious experiences, such as happiness or suffering. Totalism is the view that one outcome is better than another if and only if it contains a greater sum total of wellbeing.
Would you be interested in having a section on the website that is basically “Ways to be an EA while not being a utilitarian?” I say this as someone who is very committed to EA but very against utilitarianism. Fair enough if the answer is no, but if the answer is yes, I’d be happy to help out with drafting the section.
Nitpick: This quote here seems wrong/misleading: “What matters most for utilitarianism is bringing about the best consequences for the world. This involves improving the wellbeing of all individuals, regardless of their gender, race, species, and their geographical or temporal location.”
What do you mean by “this involves?” If you mean “This always involves” it is obviously false. If you mean “this typically involves” then it might be true, but I am pretty sure I could convince you it is false also. For example, very often more utility will be created if you abandon some people—even some entire groups of people—as lost causes and focus on creating more happy people instead. Most importantly, if you mean “for us today, it typically involves” it is also false, because creating a hedonium shockwave dramatically decreases the wellbeing of most individuals on Earth, at least for a short period before they die. :P
(You may be able to tell from the above some of the reasons why I think utilitarianism is wrong!)
I think that would be better on another website, one specifically dedicated to EA and not utilitarianism. Possibly Utilitarianism.net could link to it. Maybe an article for https://www.effectivealtruism.org/ ?
On the nitpick, I agree that the wording is misleading. Bringing people into existence is not usually understood to “improve their welfare”, since someone who doesn’t exist has no welfare (not even welfare 0). It’s probably better to say “benefit”, although it’s also a question for philosophy whether you can benefit someone by bringing them into existence.
Also, even “improve” isn’t quite right to me if we’re being person-affecting, since it suggests their welfare will be higher than before, but we only mean higher than otherwise.
Anyhow, thanks for the consideration. Yeah, maybe I’ll write a blog post on the subject someday.
This might be relevant:
“The world destruction argument” by Simon Knutsson, with appendix here.
My nitpick was not about the nonexistence stuff, it was about hurting and killing people.
I had this in mind:
A hedonium shockwave can involve a lot of killing, as you suggest.
I’m very excited about this! I think if I’d had something like this in high school (similar formatting and tone, but a lot more content), I’d have been a lot less lost then and would have found my way to doing useful stuff/thinking in a utilitarian way several years earlier!
Negative utilitarianism (NU) isn’t mentioned anywhere on the website, AFAIS. This ethical view has quite a few supporters among thinkers, and unlike classical utilitarianism (CU) NU appears satiable (“maximize happiness” vs “minimize misery”). There are subtypes like weak NU (lexical NU and lexical threshold NU), consent-based NU, and perhaps OPIS’ “xNU+”.
Are there reasons for the omission?
I’d be more excited about seeing some coverage of suffering-focused ethics in general, rather than NU specifically. I think NU is a fairly extreme position, but the idea that suffering is the dominant component of the expected utility of the future is both consistent with standard utilitarian positions, and also captures the key point that most EA NU thinkers are making.
I also agree and would like to see discussion of hedonistic/preference NU and SFE more generally.
I don’t think it quite captures the key point. The key point is working to prevent suffering, which “symmetric” utilitarians often do. It’s possible the future is positive in expectation, but it’s best for a symmetric utilitarian to work on suffering, and it’s possible that the future is negative in expectation, but it’s best for them to work on pleasure or some other good.
Symmetric utilitarians might sometimes try to improve a situation by creating lots of happy individuals rather than addressing any of the suffering, and someone with suffering-focused views (including NU) might find this pointless and lacking in compassion for those who suffer.
Good point. Thank you.
Even classical utilitarianism can fall under the umbrella term of suffering-focused ethics if its supporters agree that we should still focus on reducing suffering in practice (for its neglectedness, the relative ease of preventing it, as a common ground with other ethical views, etc.).
I’m surprised by the downvotes. There’s a page on types of utilitarianism, and NU is not mentioned, but “variable value theories, critical level theories and person-affecting views” are at least named, and NU seems better known than variable value and critical level theories. Average utilitarianism also isn’t mentioned.
My impression of variable value theories and critical level theories is that these are mostly academic theories, constructed as responses to the repugnant conclusion and other impossibility results, and pretty ad hoc for this purpose, with little independent motivation and little justification for their exact forms. Exactly where should the critical level be? Exactly what should the variable value function look like? They don’t seem to be brought up much in the literature except in papers actually developing different versions of them or comparing different theories. Maybe my impression is wrong.
It seems as though some of the discussion assumes classical utilitarianism (or at least uses CU as a synecdoche for utilitarian theories as a whole?). But, as the authors themselves acknowledge, some utilitarian theories aren’t hedonistic or totalist (or symmetrical, another unstated difference between CU and other utilitarian theories).
It is also a bit misleading to say that “many effective altruists are not utilitarians and care intrinsically about things besides welfare, such as rights, freedom, equality, personal virtue and more.” On some theories, these things are components of welfare.
And it is not necessarily true that “Utilitarians would reason that if there are enough people whose headaches you can prevent, then the total wellbeing generated by preventing the headaches is greater than the total wellbeing of saving the life, so you are morally required to prevent the headaches.” The increase in wellbeing from saving the life might be lexically superior to the increase in wellbeing from preventing the headache.
It’s discussed a bit here:
Some objections worth covering (EDIT: on the objections page), although not necessarily applicable to all versions:
1. Mere receptacles/vessels objection, replaceability, separateness of persons, and tradeoffs between the suffering of one and the pleasure of others
2. Headaches vs lives (dust specks vs torture)
3. Infinite ethics: no good solutions?
EDIT:
4. Population ethics: impossibility theorems, paradoxes, no good solutions? (inspired by antimonyanthony)
Do non-utilitarian moral theories have readily available solutions to infinite ethics either? Suggesting infinite ethics as an objection I think only makes sense if it’s a particular problem for utilitarianism, or at least a worse problem for utilitarianism than for anything else.
I’d also recommend the very repugnant conclusion as an important objection (at least to classical or symmetric utilitarianism).
I think it isn’t a problem in the first place for non-consequentialist theories, because the problem comes from trying to compare infinite sets of individuals with utilities when identities (including locations in spacetime) aren’t taken to matter at all, but you could let identities matter in certain ways and possibly get around it this way. I think it’s generally a problem for consequentialist theories, utilitarian or not.
It’s worth considering that avoiding it (Weak Quality Addition) is one of several intuitive conditions in an important impossibility theorem (of which there are many similar ones, including the earlier one which is cited in the post you cite), which could be a response to the objection.
EDIT: Or maybe the impossibility theorems and paradoxes should be taken to be objections to consequentialism generally, because there’s no satisfactory way to compare outcomes generally, so we shouldn’t rely purely on comparing outcomes to guide actions.
Ah, that’s fair. I think I was mistaking the technical usage of “infinite ethics” for a broader class of problems involving infinities in ethics in general. Deontological theories sometimes imply “infinite” badness of actions, which can have counterintuitive implications, as discussed by MacAskill in his interviews with 80k; that is why I was confused by your objection.
In case you’re missing context before you vote on my comment, they have a page for objections.
Maybe substitute “guilty” for “responsible”?
There is a part of me which dislikes you presenting utilitarianism which includes animals as the standard form of utilitarianism. I think that utilitarianism + non-speciesism falls under the “right but not trivial” category, and that a lot of legwork has to be done before you can get people to accept it, and further that this legwork must be done, instead of sliding over the inferential distance. Because of this, I’d prefer you to disambiguate between versions of utilitarianism which aggregate over humans and those which aggregate over all sentient/conscious beings, and maybe point out how this developed over time (i.e., Peter Singer had to come and make the argument forcefully, because before it was not obvious)? For example, the Wikipedia entry on utilitarianism has a whole section on “Humans alone, or other sentient beings?”.
Similarly, maybe you would also want to disambiguate a little bit more between effective altruism and utilitarianism, and explicitly mention it when you’re linking it to effective altruism websites, or use effective altruism examples?
Also, what’s up with attributing the veil of ignorance to Harsanyi but not mentioning Rawls?
The section on Multi-level Utilitarianism Versus Single-level Utilitarianism seems exceedingly strange. In particular, you can totally use utilitarianism as a decision procedure (and if you don’t, what’s the point?). The fact that you don’t have the processing power of a supercomputer and perfect information doesn’t mean that you can’t approximate it as best you can.
For example, if I buy eggs which come from less shitty farmers, or if I decide to not buy eggs in order to reduce factory farming, I’m using utilitarianism as a decision procedure. Even though I can’t discern the exact effects of the action, I can discern that the action has positive expected value.
I don’t fall into recursive loops trying to compute how much compute I should use to compute the expected value of an action because I’m not an easily disabled robot in a film. But I do sometimes go up several levels of recursion, depending on the importance of the decision. I use heuristics like I use low degree Taylor polynomials.
(I also don’t always instantiate utilitarianism. But when I do, I do use it as a decision procedure)
I have different intuitions which strongly go in the other direction.
Thank you for your comment!
My impression is that the major utilitarian academics were rather united in extending equal moral consideration to non-human animals (in line with technicalities’ comment). I’m not aware of any influential attempts to promote a version of utilitarianism that explicitly does not include the wellbeing of non-human animals (though, for example, a preference utilitarian may give different weight to some non-human animals than a hedonistic utilitarian would). In the future, I hope we’ll be able to add more content to the website on the link between utilitarianism and anti-speciesism, with the intention of bridging the inferential distance to which you rightly point.
In the section on effective altruism on the website, we already explicitly disambiguate between EA and utilitarianism. I don’t currently see the need to e.g. add a disclaimer when we link to GiveWell’s website on Utilitarianism.net, but we do include disclaimers when we link to one of the organisations co-founded by Will (e.g. “Note that Professor William MacAskill, coauthor of this website, is a cofounder of 80,000 Hours.”)
We hope to produce a longer article on how the Veil of Ignorance argument relates to utilitarianism at some point. We currently include a footnote on the website, saying that “This [Veil of Ignorance] argument was originally proposed by Harsanyi, though nowadays it is more often associated with John Rawls, who arrived at a different conclusion.” For what it’s worth, Harsanyi’s version of the argument seems more plausible than Rawls’ version. Will commented on this matter in his first appearance on the 80,000 Hours Podcast, saying that “I do think he [Rawls] was mistaken. I think that Rawls’s Veil of Ignorance argument is the biggest own goal in the history of moral philosophy. I also think it’s a bit of a travesty that people think that Rawls came up with this argument. In fact, he acknowledged that he took it from Harsanyi and changed it a little bit.”
Historically, one of the major criticisms of utilitarianism was that it supposedly required us to calculate the expected consequences of our actions all the time, which would indeed be impractical. However, this is not true, since it conflates using utilitarianism as a decision procedure with using it as a criterion of rightness. The section on multi-level utilitarianism aims to clarify this point. Of course, multi-level utilitarianism does still permit attempting to calculate the expected consequences of one’s actions in certain situations, but it makes it clear that doing so all the time is not necessary.
For more information on this topic, I recommend Amanda Askell’s EA Forum post “Act utilitarianism: criterion of rightness vs. decision procedure”.
Harsanyi’s version also came first IIRC, and Rawls read it before he wrote his version. (Edit: Oh yeah you already said this)
To my knowledge, most of the big names (Bentham, Sidgwick, Mill, Hare, Parfit) were anti-speciesist to some degree; the unusual contribution of Singer is the insistence on equal consideration for nonhumans. It was just not obvious to their audiences for 100+ years afterward.
My understanding of multi-level U is that it permits not using explicit utility estimation, rather than forbidding using it. (U as not the only decision procedure, often too expensive.) It makes sense to read (naive, ideal) single-level consequentialism as the converse, forbidding or discouraging not using U estimation. Is this a straw man? Possibly, I’m not sure I’ve ever read anything by a strict estimate-everything single-level person.
I think using expected values is just one possible decision procedure, one that doesn’t actually follow from utilitarianism and isn’t the same thing as using utilitarianism as a decision procedure. To use utilitarianism as a decision procedure, you’d need to know the actual consequences of your actions, not just a distribution or the expected consequences.
Classical utilitarianism, as developed by Bentham, was anti-speciesist, although some precursors and some theories that followed may not have been. Bentham already made the argument to include nonhuman animals in the first major work on utilitarianism:
Mill distinguished between higher and lower pleasures to avoid the charge that utilitarianism is “philosophy for swine”, but still wrote, from that Wiki page section you cite,
The section also doesn’t actually mention any theories for “Humans alone”.
I’d also say that utilitarianism is often grounded with a theory of utility, in such a way that anything capable of having utility in that way counts. So, there’s no legwork to do; it just follows immediately that animals count as long as they’re capable of having that kind of utility. By default, utilitarianism is “non-speciesist”, although the theory of utility and utilitarianism might apply differently roughly according to species, e.g. if only higher pleasures or rational preferences matter, and if nonhuman animals can’t have these, this isn’t “speciesist”.
The article by D. Meissner (the original post), the comments on it, and the expanded set of articles on the Utilitarianism.net website all make good points.
My comments:
1. Speciesism. I knew people who lived on farms and raised animals, and also grew corn, wheat, and vegetables for food and to sell for cash, so they could buy things they couldn’t produce on their farm that they decided they either needed or wanted: for example a car, truck, phone, or a college education for their children (e.g. my grandparents, or some of the families in the small rural towns I used to live in).
While they were definitely not vegans or even vegetarians, many were often not exactly speciesist: they valued very much the welfare of their cows, chickens, and the wildlife (such as deer, turkeys, trout, bears, bobcats, skunks, weasels, hedgehogs, snakes, etc.), flora (e.g. forests), and ecosystems (e.g. trout-stream valleys and the mountains above them). They took very good care of their animals until they killed or sold them, and they would not tolerate people doing illegal logging to cut down forests to sell as lumber to make paper for newspapers, or using their creeks and valleys as trash dumps and sewers.
Some of the people (even in the same families) decided the only thing that mattered was money, so they permitted illegal dumping, logging, etc. in return for cash, left the areas they grew up in as ecological wastelands, and moved (with their cash) to the city to get a ‘better life’, including a college education and a ‘good job’. The book Hillbilly Elegy by J.D. Vance shows that perspective.
Some people from cities who do have money have bought up a lot of that land and abandoned properties in small towns, and are now creating organic farms and vegan restaurants in those areas. (Those areas were not really suitable for the kind of large-scale ‘factory farming’ seen elsewhere, except for chickens; there were some cattle, but nothing like the industrial feedlots of the midwest USA. There were a few tiny fish farms: people had a trout pond, which was the fish-farm equivalent of a small vegetable garden. They also had small vegetable gardens and sold the produce in front of their homes on a ‘trust system’: they just left the vegetables and the ‘cash register’ on a table where you put your money. They sold really cheap produce and almost nobody stole anything. A few teens and preteens might steal stuff, and also break into houses, but it was unusual, and in general people knew who they were and just talked to the parents to tell them to ‘behave’.)
I heard that has changed now, partly because hard drugs (methamphetamine, heroin) have been introduced into those areas. The level of trust has gone down. (It’s near what is called the ‘heroin highway’.) It’s possible the new organic farms and vegan restaurants can entice the local people to start caring about things beyond drugs, chicken, and money, but that’s an open question: some of the old-time local people resent people they do not know, who have a lot of money, buying up all the property.
While I know most EAs probably hate hunting, the old-timers from that area (e.g. anyone over age 20, though it goes up to over 80) hunted and fished partly for food: bass, trout, deer, grouse, turkeys, squirrels, etc. It’s brutal, but so is buying a car and driving 60 miles round trip to work in a recycling plant so they can make cash and eat burgers at a McDonald’s.
I view Thoreau and Albert Schweitzer as promoting vegetarianism and anti-speciesism long before Peter Singer.
2. This article and website appear to be written from a philosophical point of view. I learned the little I know about utilitarianism from my background as a student in biology, which turned into physics (to study modern biology, you have to take physics, which I did, all the way up into quantum theory and statistical mechanics and a bit of QFT). Once you take those, you realize from a literature search that many of the famous physicists actually wrote papers on utilitarianism (as well as biology), as did economists (who studied biology and physics).
I consider myself a utilitarian, but maybe I should use a different term. (In the USA this is like saying you are a ‘socialist’: some people interpret this as meaning you support Bernie Sanders for president, while others say it means you worship Stalin.)
I also consider myself a Darwinist (though many interpret this to mean what is called ‘vulgar Darwinism’, which is not what Darwin said, i.e. the idea that we are the ‘additive’ sum of our genes). In a sense I’m also a Marxist, but not a vulgar Marxist (who are common) who thinks the world is explained as ‘class struggle’. (The labor theory of value has a kernel of truth to it, e.g. bitcoin.)
The four postulates of utilitarianism are not what has been meant by the term for the last 20-40 years, though some economists, both ‘left wing’ and ‘right wing’, still use that formalism. The first and the fourth are the most explicitly outdated: ‘additivity’ in physics went out decades ago, as did ‘consequentialism’ (there is a newer term, which actually goes way back but became more popular or was rediscovered after 1990-2000).