President of the Effective Altruism group at the University of Melbourne.
Currently studying a BSc & Concurrent Diploma (Pure Math & Neuroscience). Hoping to work on AI alignment in the future.
My understanding is that the self-effacing utilitarian is not strictly an ‘ex-utilitarian’, in that they are still using the same types of rightness criteria as a utilitarian (at least with respect to world-states). Although they may try to deceive themselves into actually believing another theory, since this would better achieve their rightness criterion, that is not the same as abandoning utilitarianism on the basis that it was somehow refuted by certain events. In other words, as you say, they’re switching theories “on consequentialist grounds”. Hence they’re still a consequentialist in the sense that is philosophically important here.
Brilliant post. Thanks for writing it. I just want to add to what you said about ethics. It seems that evaluating whether an action / event is good or bad itself presupposes an ethical theory.[1] Hence I think a lot of the claims that are being made can be described as either (a) this event shows vividly how strongly utilitarianism can conflict with ‘common-sense morality’ (or our intuitions)[2] or (b) trying to follow[3] utilitarianism tends to lead to outcomes which are bad by the lights of utilitarianism (or perhaps some other theory). The first of these seems not particularly interesting to me, as suggested in your post, and the second is a separate point entirely—but is nonetheless often being presented as a criticism of utilitarianism.
Someone else made this point before me in another post but I can’t find their comment.
But note that this applies mostly to naive act utilitarianism.
By which I mean ‘act in accordance’ with, but it’s worth noting that this is pretty underdetermined. For instance, doing EV calculations is not the only way to act in accordance with utilitarianism.
I believe the ‘walls of text’ that Adrian is referring to are mine. I’d just like to clarify that I was not trying to collapse the distinction between a decision procedure and the rightness criterion of utilitarianism. I was merely arguing that the concept of expected value can be used both to decide what action should be taken (at least in certain circumstances)[1] and whether an action is / was morally right (arguably in all circumstances) - indeed, this is a popular formulation of utilitarianism. I was also trying to point out that whether an action is good, ex ante, is not necessarily identical to whether the consequences of that action are good, ex post. If anyone wants more detail you can view my comments here.
Although usually other decision procedures, like following general rules, are more advisable, even if one maintains the same rightness criterion.
I’m not sure I agree with this. As far as I can tell the EA community has always been quite focused on being inclusive, kind and welcoming—see for instance this and this post from CEA, which are both years old. I’m very sorry to hear about the OP’s experiences of course, and honestly surprised personally since my own experience has been a lot more positive. However, this doesn’t automatically imply to me that we need a whole new community or something to that effect.
I would see this more as presenting an opportunity to improve our culture and amend any failures that are currently happening despite the efforts of a lot of community leaders. I don’t think there’s a ‘fundamental flaw’ in how the EA community is trying to operate in that respect. Also it seems to me that distancing the EA brand in the way you’re suggesting would potentially incentivize it to become even less human and amiable—because then it would be distinguished by being the ‘weird, rationalist / philosophical community’. (Not to mention that it would seemingly decrease opportunities for collaboration with the ‘other community’ and create confusion for those looking to get involved in EA.)
Edit: Just to be clear, I’m not making any general claims here about how successful the EA community has been in implementing the ideals I mentioned above. Obviously this post points to updating against that.
No worries. It is interesting though that you think my comment is a great example when it was meant to be a rebuttal. What I’m trying to say is, I wouldn’t really identify as a ‘utilitarian’ myself, so I don’t think I really have a vested interest in this debate. Nonetheless, I don’t think utilitarianism ‘breaks down’ in this scenario, as you seem to be suggesting. I think very poorly-formulated versions do, but those are not commonly defended, and with some adjustments utilitarianism can accommodate most of our intuitions very well (including the ones that are relevant here). I’m also not sure what the basis is of the suggestion that utilitarianism works worse when a situation is more unique and there is more context to factor in.
To reiterate, I think the right move is (progressive) adjustments to a theory, and moral uncertainty (where relevant), which both seem significantly more rational than particularism. It’s very unclear to me how we can know that it’s ‘impossible or unworkable’ to find a system that would guide our thinking in all situations. Indeed some versions of moral uncertainty already seem to do this pretty well. I also would object to classifying moral uncertainty as an ‘ad-hoc patch’. It wasn’t initially developed to better accommodate our intuitions, but simply because as a matter of fact we find ourselves in the position of uncertainty with respect to what moral theory is correct (or ‘preferable’), just like with empirical uncertainty.
I can’t speak for others, but this isn’t the reason I’m defending utilitarianism. I’d be more than happy to fall back on other types of consequentialism, or moral uncertainty, if necessary (in fact I lean much more towards these than utilitarianism in general). I’m defending it simply because I don’t think that the criticisms being raised are valid for most forms of utilitarianism. See my comments below for more detail on that.
That being said, I do think it’s perfectly reasonable to want a coherent ethical theory that can be used universally. Indeed the alternative is generally considered irrational and can lead to various reductios.
Hmm perhaps. I did try to address your points quite directly in my last comment though (e.g. by arguing that EV can be both a decision procedure and a rightness criterion). Could you please explain how I’m talking past you?
No. I meant ‘metaethical framework.’ It is a standard term in moral philosophy. See: https://plato.stanford.edu/entries/metaethics/
I’m aware of the term. I said that because utilitarianism is not a metaethical framework, so I’m not really sure what you are referring to. A metaethical framework would be something like moral naturalism or error theory.
Again, we do not need to bring decision theory into this. I am talking about metaethics here. So I am talking about what makes certain things morally good and certain things morally bad. In the case of utilitarianism, this is defined purely in terms of utility. And expected utility != value.
Metaethics is about questions like what would make a moral statement true, or whether such statements can even be true. It is not about whether a ‘thing’ is morally good or bad: that is normative ethics. And again, I am talking about normative ethics, not decision theory. As I’ve tried to say, expected value is often used as a criterion of rightness, not only a decision procedure. That’s why the term ‘expectational’ or ‘expectable’ utilitarianism exists, which is described in various sources including the IEP. I have to say though at this point I am a little tired of restating that so many times without receiving a substantive response to it.
Compare: we can define wealth as having a high net-worth, and we can say that some actions are better at generating a high net worth. But we need not include these actions in our definitions of the term ‘wealth’. Because being rich != getting rich. The same is true for utilitarianism. What is moral value is nonidentical to any decision procedure.
Yes, the rightness criterion is not necessarily identical to the decision procedure. But many utilitarians believe that actions should be morally judged on the basis of their reasonable EV, and it may turn out that this is in fact identical to the decision procedure (used or recommended). This does not mean it can’t be a rightness criterion. And let me reiterate here, I am talking about whether an action is good or bad, which is different to whether a world-state is good or bad. Utilitarianism can judge multiple types of things.
Also, as I’ve said before, if you in fact wanted to completely discard EV as a rightness criterion, then you would probably want to adjust your decision procedure as well, e.g. to be more risk-averse. The two tend to go hand in hand. I think a lot of the substance of the dilemma you’re presenting comes from rejecting a rightness criterion while maintaining the associated decision procedure, which doesn’t necessarily work well with other rightness criteria.
This is not a controversial point, or a matter of opinion. It is simply a matter of fact that, according to utilitarianism, a state of affairs with high utility is morally good.
I agree with that. What I disagree with is whether that entails that the action that produced that state of affairs was also morally good. This seems to me very non-obvious. Let me give you an extreme example to stress the point:
Imagine a sadist pushes someone onto the road in front of traffic, just for fun (with the expectation that they’ll be hit). Fortunately the car that was going to hit them just barely stops in time. The driver of that car happens to be a terrorist who was (counterfactually) going to detonate a bomb in a crowded space later that day, but changes their mind because of the shocking experience (unbeknownst to the sadist). As a result, the terrorist is later arrested by the police before they can cause any harm. This is a major counterfactual improvement in the resulting state of affairs. However, it would seem absurd to me to say that it was therefore good, ex ante, to push the person into oncoming traffic.
I’m guessing you mean ‘normative ethical framework’, not ‘meta-ethical framework’. That aside, what I was trying to say in my comment is that EV theory is not only a criterion for a rational decision, though it can be one,[1] but is often considered also a criterion for what is morally good on utilitarian grounds. See, for instance, this IEP page.
I think your comment addresses something more like objective (or ‘plain’ or ‘actual’) utilitarianism, where all that matters is whether the outcome of an action was in fact net positive ex post, within some particular timeframe, as opposed to whether the EV of the outcome was reasonably deemed net positive ex ante. The former is somewhat of a minority view, to my knowledge, and is subject to serious criticisms. (Not least that it is impossible to know with certainty what the actual consequences of a given action will be.[2])[3]
That being said, I agree that the consequences ex post are still very relevant. Personally I find a ‘dual’ or ‘hybrid’ view like the one described here most plausible, which attempts to reconcile the two dichotomous views. Such a view does not entail that it is morally acceptable to commit an action which is, in reasonable expectation, net negative, it simply accepts that positive consequences could in fact result from this sort of action, despite our expectation, and that these consequences themselves would be good, and we would be glad about them. That does not mean that we should do the action in the first place, or be glad that it occurred.[4]
Actually, I don’t think that’s quite right either. The rationality criterion for decisions is expected utility theory, which is not necessarily the same as expected value in the context of consequentialism. The former is about the utility (or ‘value’) with respect to the individual, whereas the latter is about the value aggregated over all morally relevant individuals affected in a given scenario.
Also, in a scenario where someone reduced existential risk but extinction did in fact occur, objective utilitarianism would state that their actions were morally neutral / irrelevant. This is one of many possible examples that seem highly counterintuitive to me.
Also, if you were an objective consequentialist, it seems you would want to be more risk-averse and less inclined to use raw EV as your decision procedure anyway.
I am not intending to raise the question of ‘fitting attitudes’ with this language, but merely to describe my point about rightness in a more salient way.
To my knowledge the most common rightness criterion of utilitarianism states that an action (or rule, or virtue) is good if, in expectation, it produces net positive value. Generally fraud of any kind does not have a net positive expected value, and it is very hard to distinguish the exceptions[1], if indeed any exist. Hence it is prudent to have a general rule against committing fraud, and I believe this aligns with what Richard is arguing in his post.
Personally I find it very dubious that fraud could ever be sanctioned by this criterion, especially once the damage to defrauded customers and reputational damage is factored in[2]. But let’s imagine, for the sake of discussion, that exceptions do exist and that they can be confidently identified[3]. This could be seen as a flaw of this kind of utilitarianism, e.g. if one has a very strong intuition against illegal actions like fraud[4]. Then one could appeal to other heuristics, such as risk-aversion (which is potentially more compatible with theories like objective utilitarianism) or moral uncertainty, which is my preferred response. I.e. there is a non-trivial possibility that theories like traditional deontology are true, which should also be factored into our decisions (e.g. by way of a moral parliament).
To summarise, I think in any realistic scenario, no reasonable type of utilitarianism will endorse fraud. But even if it somehow does, there are other adequate ways to handle this counter-intuitive conclusion which do not require abandoning utilitarianism altogether.
Edit: I just realised that maybe what you’re saying is more along the lines of “it doesn’t matter if the exceptions can be confidently identified or not, what matters is that they exist at all”. An obvious objection to this is that expected value is generally seen as relative to the agent in question, so it doesn’t really make sense to think of an action as having an ‘objective’ net positive EV.[5] Also, it’s not very relevant to the real-world, since ultimately it’s humans who are making the decisions based on imperfect information (at least at the moment).
This is especially so given the prevalence of bias / motivated reasoning in human reasoning.
And FWIW, I think this is a large part of the reason why a lot of people have such a strong intuition against fraud. It might not even be necessary to devise other explanations.
Just to be clear, I don’t think the ongoing scenario was an exception of this kind.
Although it is easy to question this intuition, e.g. by imagining a situation where defrauding one person is necessary to save a million lives.
If an objective EV could be identified on the basis of perfect information and some small fundamental uncertainty, this would be much more like the actual value of the action than an EV, and defining it this way would lead to absurd conclusions. For instance, any minor everyday action could, through butterfly effects, lead to an extremely evil or extremely good person being born, and thus would have a very large ‘objective EV’.
Ah yes, I see now that your argument rests on fewer premises than I thought.
Firstly I would echo what Devin said above about this being a flaw of “bullet-biting strong deontic longtermism”. One could seemingly justify basically any action that marginally increases productivity on those grounds (even for a very temporary period of time). That being said, I think there are probably significant positive flow-on effects from veganism too. For one thing, it may increase societal moral progress in expectation. Similarly, there is evidence to suggest that at an individual level it reduces cognitive biases related to speciesism and increases one’s moral consideration for non-humans (as Michael noted above). Compounding benefits from effects like these may well outweigh those of productivity increases.
Also, it’s very unclear to me that being vegan would actually reliably decrease the average person’s productivity, even if it is initially a revealed preference. Obviously this is ultimately an empirical question. However one could make a priori arguments in the other direction too. E.g. perhaps by reducing cognitive dissonance, people tend to feel more happy, and therefore are more productive. Or perhaps caring about a cause like animal welfare increases motivation and feelings of purpose marginally throughout one’s life. This is not to say I agree with any of those speculations, but just to point out that they could be made.
Finally, I think there are probably sound deontological reasons to be vegan, which are important under moral uncertainty, but I won’t get into that too much in this comment. Naturally the same would apply for a lot of the other counterintuitive implications that this form of longtermism would have.
I just want to give some more comments / counterpoints:
Firstly, I think this post may be somewhat exaggerating the actual magnitude by which these diets differ in taste pleasure (on average). My intuition would be that it’s actually quite small (at least after an initial adjustment period) and relatively insignificant compared to other changes people could potentially implement to make themselves ‘happier’ on an everyday level.
(Note that this is ignoring considerations of convenience since they aren’t mentioned much in your argument but I’d be happy to comment on that as well.)
Also, I don’t find myself convinced that one’s preferences can’t change in this case. This is related to the adjustment period I mentioned above. From personal and anecdotal experience I think many things tend to ‘grow on you’ over time, including foods, and this effect seems much more important than ‘consciously deciding’ to change your preferences. Indeed, it does seem pretty implausible that the latter would work in isolation, but I think other factors (like adjustment) are relevant here.
Thanks for this post, it was very useful. However, I have some issues with the article you used to support your claims about conversion rates for alpha-linolenic acid (ALA) to EPA and DHA. Given its quality, I don’t think the conclusion that “the vegan-hostile health gurus happen to be right this time” is warranted.
Firstly, take this paragraph near the beginning of the article:
However, research clearly indicates that the conversion of ALA to EPA and DHA is extremely limited. Less than 5% of ALA gets converted to EPA, and less than 0.5% (one-half of one percent) of ALA is converted to DHA.
No study is cited for this claim.
Also, not long after:
Studies have shown that ALA supplements (like flax oil) are unable to raise plasma DHA levels in vegans, despite low DHA levels at baseline. (ref)
It simply says ‘(ref)’ in brackets, but the reference is nowhere to be found.
I just did some preliminary research on these conversions, and it seems like the evidence is quite mixed. I believe I found the study that the author of that article was looking at[1], but I also found multiple other studies ([2], [3] and [4][2]) which suggest the rates are considerably higher, and in fact high enough for only 1 or 2 grams of ALA to provide enough EPA and DHA.[3]
In summary, I don’t think it’s at all clear that vegetarians or vegans don’t get enough EPA, DHA or DPA, although it is still possible.
Which supports the first, but not the second quoted claim.
This study indicates 5% conversion of ALA to DHA, especially after an extended period of time, which is interesting.
Based on a lot of research, the Australian National Health and Medical Research Council recommends 160 mg per day of combined DHA, EPA and DPA for adult males, and 90 mg for adult females (source). Meeting the higher of these amounts would require 3.2 grams of alpha-linolenic acid at a 5% conversion rate, or 1.6 grams at a 10% conversion rate.
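The arithmetic above can be sketched as a quick sanity check. This is a minimal illustration only, assuming the NHMRC figure of 160 mg/day and the stated conversion rates; the function name is my own, not from any source.

```python
def ala_needed_grams(target_mg: float, conversion_rate: float) -> float:
    """ALA intake (in grams) needed to yield target_mg of combined
    EPA/DHA/DPA at a given fractional conversion rate."""
    return (target_mg / conversion_rate) / 1000  # mg -> g

# NHMRC adult-male target of 160 mg/day:
print(ala_needed_grams(160, 0.05))  # 3.2 g/day at 5% conversion
print(ala_needed_grams(160, 0.10))  # 1.6 g/day at 10% conversion
```

At either rate the required intake is on the order of a few grams of ALA per day, which is achievable from foods like flaxseed or walnuts.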
In fact arguably even the rate mentioned in the original article is sufficient, since overall it is >5% (and that’s without DPA). The low amount of DHA is somewhat concerning, but there is generally no specific stipulation for a DHA requirement, and it seems this is partially because there is interconversion between the three fatty acids, e.g. from EPA to DHA.
Probably the best way to answer this question is to look at tolerable upper limit estimates for DHA and EPA that have been set by expert organisations. This page states that the US FDA recommends less than 3,000 mg per day of combined EPA, DPA and DHA intake (apparently it is not possible to separate the three here). This typically would mean that no adverse effects have been observed below that level.
You can optimise for health on a vegan diet as well. The only difference would be any nutrients that are exclusively found in animal products. But, as I stated in my other comment below, I think there are good reasons to believe that it’s unlikely there are any such nutrients with non-negligible health benefits, other than those that we are aware of and can be supplemented. The main reason is all the knowledge we collectively have about the nutrients in the human diet (and the compounds which are important for metabolism in the human body).
I know this is very late, but I felt obliged to reply since I disagree with a lot of the points made in the post you quoted.
First, not all animal products are equal, and the oft-touted pro-veg*n studies overlook these differences. Many of the supposed benefits of veg*n diets seem to come from the exclusion of processed meat, which is meat that has been treated with modern preservatives, flavorings, etc. This is really backed up by studies, not just anti-artificial sentiment. Good studies looking at the health impacts of unprocessed meat (which, I believe, generally includes ground beef) are rare.
Unprocessed red meat is still classified as a Group 2A carcinogen by the WHO’s IARC (International Agency for Research on Cancer), meaning it is ‘probably carcinogenic’. Also, fish and other seafood are known to contain relatively high levels of mercury, meaning there are health issues associated with consuming too much of them.
Additionally, saturated and trans fats are almost ubiquitous in animal products, often at relatively high levels. Both of these are widely considered to be unhealthy. Indeed, the NAS (National Academy of Sciences) in the US has stated that there is no safe level of trans fat consumption.
Then there are a select few types of meat which seem particularly healthy, like sardines, liver and marrow, and there is still less reason to believe that they are harmful.
Levels of cadmium (another toxic metal) seem to be typically higher in offal, like liver, than in plant foods (source). This is somewhat unsurprising as heavy metals tend to accumulate in the liver when animals are trying to metabolise them. Also, it seems like consuming a decent amount of liver would lead to getting far too much of some vitamins and minerals, which is generally not good (e.g. see these composition data).
Second, vegan diets miss out on creatine, omega-3 fat in its proper EHA/DHA form, Vitamin D, taurine, and carnosine.
Humans are very capable of converting ALA (the predominant omega-3 fatty acid found in plant foods) to EPA and DHA (source). It is not hard to get the amount of ALA required for sufficient EPA and DHA, even assuming a somewhat low conversion rate. This amount would typically be a few grams per day. Also, the conversion rate is taken into account in all nutritional guidelines.
Taurine is synthesised in the human body from cysteine. Carnosine is synthesised from histidine and beta-alanine, the latter of which is produced from the breakdown of cytosine and uracil. Cysteine and histidine are both found in high amounts in plant proteins (source), while cytosine and uracil, being nucleotide bases, are quite ubiquitous in the body. I think it is reasonable to assume that the human body will produce the amount of taurine and carnosine required for optimal health, even without obtaining them through the diet, and I would place a high prior probability on that. Additionally, the evidence so far suggests that there are no health benefits from dietary intake or higher levels of these compounds.
For creatine the evidence is a little more mixed, although the majority of evidence still suggests that dietary intake does not provide health benefits. Nonetheless, one may want to supplement it out of an abundance of caution.
You can of course supplement, but at the cost of extra time and money
I don’t think supplementing generally takes more than ~5 minutes extra per day. And generally I find that supplements are a fairly negligible expense. (This is especially the case because some brands have very high amounts of the nutrients in their products, and personally I often break up the tablets into smaller portions.) I can add some more information about costs here if anyone is interested.
For some people who are simply bad at keeping habits—me, at least—supplementing for an important nutrient just isn’t a reliable option; I can set my mind to do it but I predictably fail to keep up with it.
I think anyone who can maintain a typical lifestyle is good enough at keeping habits to utilise supplementation. (Not criticising the OP—maybe they just needed a bit more time to get used to it.) Nonetheless, foods fortified with nutrients like vitamin B12 and vitamin D are widely available, so one could simply use those instead. If this also isn’t an option for some reason, you could just take larger amounts of the supplements once every few days or even once a week, depending on the elimination half-life of the nutrient. To my knowledge, this would work for vitamin B12 and vitamin D (although not for creatine apparently).
Third, vegan/vegetarian diets reduce your flexibility to make other healthy changes. As an omnivore, it’s pretty easy for me to minimize or avoid unhealthy foods such as store-bought bread (with so many preservatives, flavorings etc) and fortified cereal.
Preservatives and flavourings (and additives in general) are not automatically unhealthy, and indeed most of them are perfectly healthy, as they have to pass rigorous safety standards before being approved (and they are generally studied extensively even after they’re approved). As such, store-bought bread and fortified cereals are not unhealthy either, and I would say they’re actually good components of a diet. However, if anyone wants to share evidence to the contrary, feel free.
As a vegetarian or vegan, [avoiding unhealthy foods] would be significantly more difficult.
I find that vegetarian or vegan products are often healthier than the alternatives. This is probably because there seems to be a frequent association between plant-based foods and healthiness in marketing, e.g. because companies think the target demographics overlap significantly (which I could imagine being true).
Finally, nutritional science is frankly a terrible mess, and not necessarily due to ill motives and practices on the part of researchers (though there is some of that) but also because of just how difficult it is to tease out correlation from causation in this business. There’s a lot that we don’t understand, including chemicals that may play a valuable health role but haven’t been properly identified as such. Therefore, in the absence of clear guidance it’s wise to defer to eating (a) a wide variety of foods, which is enhanced by including animal products, and (b) foods that we evolved to eat, which has usually included at least a small amount of meat.
I think this is the strongest argument in the post. However I want to point out that nutrition as a field has existed for a long time and by now we have characterised and studied the majority of nutrients that are present in typical human diets. It seems unlikely that there are nutrients we are unaware of that are exclusively present in animal products and have non-negligible health benefits (from dietary intake). Additionally, there are cohort studies which measure a large number of health outcomes for omnivores and vegetarians and/or vegans, and they haven’t identified any particular negative effects from either of the latter (indeed it’s usually the other way around). The main disadvantage I can think of there is that these studies don’t measure every possible health outcome that may be relevant.
For these reasons, I weakly feel that the healthiest diet will include some meat and/or fish, and feel it more strongly if we consider that someone is spending only a limited amount of time and money on their diet. Of course that doesn’t mean that a typical Western omnivorous diet is superior to a typical Western veg*n diet (it probably isn’t).
I can understand that there may be a small negative expected value from nutrients lacking in a vegan diet that we are currently unaware of. However I think this is plausibly outweighed by the negative expected value from some of the health effects (e.g. those above) that are associated with most animal products, given that these are more probable and seemingly more serious.
Or, to put this another way, your prior for the best diet containing some animal products might initially be quite high, but in light of the evidence against the healthiness of many animal products, I think the probability becomes quite low.
That being said, I disagree that it takes more time and money to create an optimal (or just reasonably good) vegan diet than an optimal omnivorous diet, for reasons listed above, and because I think the latter is significantly more difficult than one might intuitively believe.
I think this is a great idea. Just wanted to flag that we’ve done this with other clubs at the University of Melbourne in the past. To give some concrete examples of how this can achieve quite a lot without a huge amount of time and effort:
We successfully diverted $500 to GiveDirectly on one occasion, from the annual revenue of a club that raises money for charity, simply by attending their AGM and giving a presentation
On another occasion, we joined as a co-host for a charity fundraiser event with several other clubs, and were allowed to select high impact / EA-aligned charities as the recipients for the event, which ended up raising close to $1,200 total
I would definitely encourage EA groups at other universities to try similar things. There could be a lot of low-hanging fruit, e.g. clubs who simply haven’t thought that carefully about their choices of charities before.