I downvoted this, and would feel strange not talking about why:
I think there are lots of good reasons, moral or otherwise, to not be vegan—maybe you can’t afford vegan food, or otherwise cannot access it. Maybe you’ve never heard of veganism. Maybe there are good reasons to think that the animal products you’re eating aren’t causing additional harm. Maybe you just like animal products a lot, and want to eat some, even though you know it is bad.
But I don’t think this argument is a particularly good one, and I don’t think it engages well with questions of animal ethics:
1. “I think there’s a very large chance they don’t matter at all, and that there’s just no one inside to suffer”—this strikes me (for birds and mammals at least) as a statement in direct conflict with a large body of scientific evidence, and to some extent, consensus views among neuroscientists (e.g. the Cambridge Declaration on Consciousness https://en.wikipedia.org/wiki/Animal_consciousness#Cambridge_Declaration_on_Consciousness). Though to be fair, you are assuming they do feel pain in this post.
2. Your weights for animals’ lives seem fairly arbitrary. I agree that if those were good weights to use, maybe the moral trade-offs would be justified, but if you’re just saying, with little basis, that a pig has 1⁄100 human moral worth, I don’t know how to evaluate it. It isn’t an argument. It’s just an arbitrary discount to make your actions feel justified from a utilitarian standpoint.
I also think these moral worth statements need more clarification—do you mean that while I (a human) feel things on the scale of −1000 to 1000, a pig only feels things on the scale of −10 to 10? Or do you mean a pig is somehow worth less intrinsically, even though it feels similar amounts of pain as me? I am skeptical of the first claim for lack of evidence, and the second seems unjustifiably biased against pigs for no particular reason. (The sketch at the end of this comment illustrates how different these two readings are.)
I generally think factory farms are pretty bad, and maybe as bad as torture. Removing cows from the equation, eating animal products requires 6.125 beings to be tortured per year per American (by the numbers you shared). I personally don’t think that is a worthwhile thing to cause. Randomly assigning small moral weights to those animals to feel justified seems unscientific and odd.
I think it seems fairly clear that there is a strong case to be made, if you’re someone who has the means and access to vegan food and are a utilitarian of various sorts, to eat at least a mostly vegan diet. No one has to be perfectly moral all the time, and I think it’s probably okay (on average) to often not be perfectly moral. But presenting arbitrarily assigned discounts on lives until your actions are morally justified is a weak justification.
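To make that distinction concrete, here is a minimal sketch with entirely hypothetical numbers (none are from the post or this thread): the same headline weight of 1⁄100 can come from a claim about felt intensity, a claim about intrinsic worth, or any mix of the two, and each factor would call for a different kind of evidence.

```python
# Two ways to get the same headline weight of 1/100 for a pig
# (hypothetical decomposition; no numbers here come from the post).

def headline_weight(intensity_ratio, intrinsic_discount):
    """Overall weight = (how intensely the animal feels, relative to a
    human) * (how much each unit of its experience intrinsically counts)."""
    return intensity_ratio * intrinsic_discount

# Reading 1: a pig feels on a -10..10 scale against our -1000..1000,
# but each unit of pig experience counts fully.
print(headline_weight(intensity_ratio=0.01, intrinsic_discount=1.0))  # 0.01

# Reading 2: a pig feels roughly what we feel, but its experience is
# intrinsically discounted 100-fold.
print(headline_weight(intensity_ratio=1.0, intrinsic_discount=0.01))  # 0.01
```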
“I think there’s a very large chance they don’t matter at all, and that there’s just no one inside to suffer”—this strikes me (for birds and mammals at least) as a statement in direct conflict with a large body of scientific evidence, and to some extent, consensus views among neuroscientists (e.g. the Cambridge Declaration on Consciousness https://en.wikipedia.org/wiki/Animal_consciousness#Cambridge_Declaration_on_Consciousness).
I think that the Cambridge Declaration on Consciousness is weak evidence for the claim that this is a “consensus view among neuroscientists”.
From Luke Muehlhauser’s 2017 Report on Consciousness and Moral Patienthood:
1. The document reads more like a political document than a scientific document. (See e.g. this commentary.)
2. As far as I can tell, the declaration was signed by a small number of people, perhaps about 15 people, and thus hardly demonstrates a “scientific consensus.”
3. Several of the signers of the declaration have since written scientific papers that seem to treat cortex-required views as a live possibility, e.g. Koch et al. (2016) and Laureys et al. (2015), p. 427.
While you’re right that the Cambridge Declaration on Consciousness was signed by few people, they were mostly very prominent and influential researchers, which was the point of the thing. But yeah, it is weak evidence on its own, I agree.
I don’t know of specific survey data, but based on both the declaration and its continued influence, and the wide variety of opinions, literature reviews, etc. supporting the position, my impression is that there is somewhat of a consensus, though there are occasional outliers. I believe my “to some extent, consensus” accurately captures the state of the field. In either case, though, it is beside the point, since Jeff assumed animals to be sentient for the post. Thanks for sharing! :)
Hi Abraham! Thanks for pointing out that it would be helpful to clarify what is meant by the tradeoff values.
I differ on this point:
if you’re just saying, with little basis, that a pig has 1⁄100 human moral worth, I don’t know how to evaluate it. It isn’t an argument. It’s just an arbitrary discount to make your actions feel justified from a utilitarian standpoint.
I think we should give Jeff the benefit of the doubt here. I don’t think his estimates are arbitrary. I think they are honest reflections of the conclusions he has come to given his experience and his understanding of the evidence.
It would be nice to hear more about Jeff’s rationale. But in terms of community norms, I’d like to keep space open for people who want to present novel arguments without having to exhaustively justify every premise.
Yeah, that’s fair—I was not charitable in my original comment re: whether there is a rationale behind those estimates, when perhaps I ought to assume there is one. But part of my point is that because this argument hinges entirely on that rationale, not providing it makes the whole thing seem very sketchy.
While I don’t think human experiences and animal experiences are comparable in such a direct way, as an illustration imagine me making a post that said, “I think humans in other countries are worth 1⁄10 of those in my own country, therefore it seems like more of a priority to help those in my own country,” and providing no reasoning or clarification for that discount. You would be justified in being very skeptical of the argument I was making, and in viewing it as low quality, even though there might be a variety of other good reasons to prioritize helping those in my own country. I don’t think that kind of statement is high enough quality on its own to be entertained or to support an argument. But at its core, that’s the argument in this post. I’d be interested in talking about the reasons behind those discounts, but without them there just isn’t a way to engage with this argument productively.
For the record, I generally don’t think it is a major wrong to not be vegan, and I wouldn’t downvote or be this critical of someone voicing something along the lines of “I really like how meat tastes, so I’m not vegan.” I am more critical here because the post attempts a moral justification for not eating a vegan diet, and I think that argument not only fails but also doesn’t attempt to defend or explain its core premises and assumptions, especially where those premises seem to run contrary to some degree of scientific evidence and consensus, which community norms here suggest should be taken seriously.
That being said, I think it’s fully possible there are good justifications for having such large discounts on the moral worth of animals, and those discounts are worth discussing. But that was glossed over here, which is why I am responding more critically.
Do the weights really affect the argument? I think Jeff is saying that being omnivorous results in ~6 additional animals alive at any given point. If an animal’s existence on a farm is as bad as one human in the developing world is good (a pretty non-speciesist weighting), then it’s $600 to go vegan.
$600 is admittedly much more than $0.43, but my guess is that Jeff still would rather donate the $600.
Upvoted for sharing your reason for downvoting. I wish people did this more often!
My post describes a model for thinking about when it makes sense to be vegan, and how I apply it in my case. My specific numbers are much less useful to other people, and I’m not claiming that I’ve found the one true best estimate. Ways the post can be useful include (a) discussion over whether this is a good model to be using and (b) discussion over how people think about these sorts of relative numbers.
I included the “I think there’s a very large chance they don’t matter at all, and that there’s just no one inside to suffer” out of transparency. ( https://www.facebook.com/jefftk/posts/10100153860544072?comment_id=10100153864306532 ) The post doesn’t depend on it at all, and everything is conditional on animals mattering.
You’re right that the post doesn’t argue for my specific numbers on comparing animals and humans: they’re inputs to the model. On the other hand, I do think that if we surveyed the general population on how they would make tradeoffs between human life and animal suffering these would be within the typical range, and these aren’t numbers I’ve chosen to get a specific outcome.
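To make the model concrete, here’s a minimal sketch of the kind of calculation under discussion. Every number in it is a hypothetical stand-in: the per-species animal-years, the tradeoff weights, and the ~$100-per-human-life-year conversion are illustrative, not the post’s actual inputs. The point is just that the dollar figure scales linearly with the weights, which is why the thread keeps circling back to them.

```python
# Minimal sketch of the tradeoff model; every number below is a
# hypothetical stand-in, not an input from the post.

# Animal-years of factory farming attributed to one omnivore per year
# (placeholders chosen to sum to ~6, the ballpark discussed above).
animal_years = {"chicken": 5.0, "pig": 0.5, "turkey": 0.5}

# Tradeoff weights: averting one animal-year on a factory farm is
# treated as being as good as this many human life-years.
weights_equal = {"chicken": 1.0, "pig": 1.0, "turkey": 1.0}
weights_discounted = {"chicken": 0.0005, "pig": 0.01, "turkey": 0.001}

# Hypothetical conversion: effective dollars of donations per
# human-life-year-equivalent of good done.
DOLLARS_PER_HUMAN_LIFE_YEAR = 100.0

def annual_value_of_going_vegan(years, weights, dollars_per_year):
    """Dollar-equivalent of the suffering averted by one vegan year."""
    life_year_equivalents = sum(
        amount * weights[species] for species, amount in years.items()
    )
    return life_year_equivalents * dollars_per_year

# Equal weights land near the "$600" figure mentioned above; heavily
# discounted weights land well under a dollar, the regime of the
# post's estimate.
print(annual_value_of_going_vegan(animal_years, weights_equal, DOLLARS_PER_HUMAN_LIFE_YEAR))       # 600.0
print(annual_value_of_going_vegan(animal_years, weights_discounted, DOLLARS_PER_HUMAN_LIFE_YEAR))  # 0.8
```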
I also think these moral worth statements need more clarification
I phrased these as “averting how many animal-years on a factory farm do I see as being about as good as giving a human another year of life?” As in: if you gave me a choice between the two, which would I prefer? This seems pretty carefully specified to me, and clear enough that someone else could give their own numbers and we could figure out where our largest differences are?
eating animal products requires 6.125 beings to be tortured per year per American. I personally don’t think that is a worthwhile thing to cause.
This kind of argument has issues with demandingness. Here’s a parallel argument: renting a 1br apartment for yourself instead of splitting a 2br with someone kills ~3.5 people a year, because you could be donating the difference. (Figuring a 1br costs $2k/m and your half of a 2br costs $1.5k/m, this gives a delta of $6k/y, and GiveWell gives a best guess of ~$1,700 for “Cost per outcome as good as averting the death of an individual under 5 — AMF”.) Is that a worthwhile thing to cause?
In general, I think the model EAs should be using for thinking about giving things up is to figure out how much sacrifice we’re willing to make, and then figure out, for that level of sacrifice, what options do the most good. Simply saying “X causes harm, so we should not do X” turns into “if there’s anything you don’t absolutely need, or anything you consume where there’s a slightly less harmful version, you must stop”.
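Spelling out the arithmetic in that parallel, using only the rent figures and the GiveWell best guess quoted above (reading “the difference” as your own monthly savings):

```python
# Arithmetic behind the rent parallel above.
ONE_BR_RENT = 2000.0           # $/month, renting a 1br alone
HALF_TWO_BR_RENT = 3000.0 / 2  # $/month, your share of a split 2br

annual_difference = 12 * (ONE_BR_RENT - HALF_TWO_BR_RENT)  # $6,000/year

# GiveWell's quoted best guess: cost per outcome as good as averting
# the death of an individual under 5 (AMF).
COST_PER_DEATH_AVERTED = 1700.0

print(annual_difference / COST_PER_DEATH_AVERTED)  # ~3.5 per year
```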
I appreciate your thoughtful response to my post, and I think I unintentionally came across harshly. I think you and I likely disagree on how much to weight the moral worth of animals, and on what that entails about what we ought to do. But my discomfort with this post (I hope, though of course I have subconscious biases) is specifically with the unclarified statements about comparative moral worth between humans and other species. I made my comment to clarify that the reason I voted this down is that I think it is a very bad community standard to blanket-accept statements of the sort “I think that these folk X are worth less than these other folk Y” (not a direct quote from you, obviously) without stating precisely why one believes that or justifying that claim. That genuinely feels like a dangerous precedent, and without context it ought to be viewed with a lot of skepticism. Likewise, if I made an argument that assumed but did not defend the claim that people different from me are worth 1/10th as much as people like me, you likely ought to downvote it, regardless of the value of the model I might be presenting for thinking about an issue.
One small side note—I feel confused about why the surveys of how the general public view animals are being cited as evidence in favor of casual estimations of animals’ moral worth in these discussions. Most members of the public, myself included, aren’t experts in either moral philosophy or animal sentience. And we also know that most members of the public don’t view veganism as worthwhile. Using this data as evidence that animals have less moral worth strikes me as analogous to saying “most people who care more about their families than about others, when surveyed, seem to believe that people outside their families are worth less morally. On those grounds, I ought to think that people outside my family are worth less morally.” This kind of survey provides information on what people think about animals, but it is in no way evidence of the moral status of animals. But this might be the moral realist in me, and/or an inclination toward believing that moral value is something individuals have, not something assigned to them by others :).
I feel confused about why the surveys of how the general public view animals are being cited as evidence in favor of casual estimations of animals’ moral worth in these discussions
Let’s say I’m trying to convince someone that they shouldn’t donate to animal charities or malaria net distribution, but instead they should be trying to prevent existential risk. I bring up how many people there could potentially be in the future (“astronomical stakes”) as a reason for why they should care a lot about those people getting a chance to exist. If they have a strong intuition that people in the far future don’t matter, though, this isn’t going to be very persuasive. I can try to convince them that they should care, drawing on other intuitions that they do have, but it’s likely that existential risk just isn’t a high priority by their values. Them saying they think there’s only a 0.1% chance or whatever that people 1000 years from now matter is useful for us getting on the same page about their beliefs, and I think we should have a culture of sharing this kind of thing.
On some questions you can get strong evidence, and intuitions stop mattering. If I thought we shouldn’t try to convince people to go vegan because diet is strongly cultural and trying to change people’s diet is hopeless, we could run a controlled trial and get a good estimate for how much power we really do have to influence people’s diet. On other questions, though, it’s much harder to get evidence, and that’s where I would place the moral worth of animals and people in the far future. In these cases you can still make progress by your values, but people are less likely to agree with each other about what those values should be.
(I’m still very curious what you think of my demandingness objection to your argument above)
I included the “I think there’s a very large chance they don’t matter at all, and that there’s just no one inside to suffer” out of transparency. The post doesn’t depend on it at all
I don’t see how that can be true. Surely the weightings you give would be radically different if you thought there was “someone inside to suffer”?
The post doesn’t depend on it, because the post is all conditional on animals mattering a nonzero amount (“to be safe I’ll assume they do [matter]”).
If he thinks there’s no one inside to suffer, then it’s worth sacrificing an infinite number of chickens for the convenience of one person.
These numbers are presumably based on the idea that chickens are their own, independent, semi-conscious beings.
So I think that once you accept a particular framing or ontology, or cluster of beliefs, vegetarianism starts to sound pretty obvious. One such cluster might be:
Moral realism: There is an objective and scientific answer to how much a pig’s life is worth compared to a human. Ethics is at its best an investigation into the nature of reality, from which moral obligations follow.
Kant is cool. The answer to “why should I do good?” is “because I must”.
Peter Singer ideas: Pain and suffering are extremely important. Negative utilitarianism. Sentience over sapience. Speciesism as being wrong.
Realizing that, deep down, you care about animals a great amount.
...
And you seem to be arguing from a framing similar to the above. However, that framing is not obvious, and one could adopt some other cluster of beliefs, such as:
Moral relativism: There isn’t an objective and scientific answer to many moral questions. Many ethical questions or concepts are not well defined, and are best resolved by introspecting on your preferences. Morality at its best is a coordination game played in good faith.
Gendlin is cool. The answer to “why do I strive to do good?” is “because I want”, or “because I choose to”.
Enlightenment humanism: Human flourishing. Sapience over sentience. Preference utilitarianism among humans.
Realizing that, deep down, you care about animals a small amount.
...
And when arguing with someone who has beliefs near the second cluster, I don’t think that assuming that beliefs in the first cluster are obviously right is a great tactical move (I’m ignoring audience effects). In fact, when I was not yet vegetarian, I found that kind of move extremely annoying, and to some extent I still do (“that guy is saying that things which took me years to understand and/or come to share, and which in some cases are still not clear to me, are obviously true?”).
Instead, may I suggest a moral trade as a tactical move? (see: Morality at its best is a coordination game played in good faith)
You (@abrahamrowe) donate $4.30 (a factor of x10 because of your deep magnanimity) to @Jeff_Kaufman’s best human existential risk reduction charity (easily another factor of x10 according to long-termist assumptions).
Jeff_Kaufman tries being vegetarian for a year (or changes his numbers above).
Considering this type of moral trade is possible because the original poster quantified his preferences to the best of his ability. This should be highly lauded, and gets a strong upvote from me.
While I think moral trades are interesting, I don’t know why you would expect me to see $4.30 going to an existential risk charity to be enough for it to be worth me going vegetarian for a year over? I’d much rather donate $4.30 myself and not change my diet.
I think you’re conflating “Jeff sees $0.43/y to a good charity as being clearly better than averting the animal suffering due to omnivorous eating” and “Jeff only selfishly values eating animal products at $0.43/y”?
If anyone’s genuinely interested in this, I’ll switch my diet from eating organic meat ~5x a week to completely vegan in exchange for a donation to the Against Malaria Foundation. £10 per week.
(I think that’s a bad deal for everyone except AMF—there are way better things you can invest in if you care about animal welfare—but I would genuinely do it!)
I agree that the direct effect on animals seems pretty low for the cost compared to EAA charities. I think most of the value would come from getting you to go vegan for a few months, or however long it takes for the diet to feel easy/automatic for you, for the chance that you might stick with it, reduce your consumption further, or increase your concern for animals in the long term. I think I remember you saying somewhere that you’ve been vegetarian before (correct me if I’m wrong), so I’m not sure an experiment with veganism would make much difference in the long term.
Also, there are EAs who are both already inclined to donate to AMF and concerned about animal welfare, so you might want to specify counterfactual donations. :)
Yes, I meant counterfactual donations, and yes I’ve spent a couple months vegetarian before. Good points both! :)
I agree that I was assuming a certain moral framework in my post—I’ve updated it to refer explicitly to utilitarianism of some kind, since that’s a fairly common view in EA.
Thanks for the moral trade idea!