So I think that once you accept a particular framing or ontology, or cluster of beliefs, vegetarianism starts to sound pretty obvious. One such cluster might be:
Moral realism: There is an objective and scientific answer to how much a pig’s life is worth compared to a human’s. Ethics at its best is an investigation into the nature of reality, from which moral obligations follow.
Kant is cool. The answer to “why should I do good?” is “because I must”.
Peter Singer ideas: Pain and suffering are extremely important. Negative utilitarianism. Sentience over sapience. Speciesism is wrong.
Realizing that, deep down, you care about animals a great amount.
...
And you seem to be arguing from a framing similar to the above. However, that framing is not obvious, and one could adopt some other cluster of beliefs, such as:
Moral relativism: There isn’t an objective and scientific answer to many moral questions. Many ethical questions or concepts are not well defined, and are best resolved by introspecting on your preferences. Morality at its best is a coordination game played in good faith.
Gendlin is cool. The answer to “why do I strive to do good?” is “because I want”, or “because I choose to”.
Enlightenment humanism: Human flourishing. Sapience over sentience. Preference utilitarianism among humans.
Realizing that, deep down, you care about animals a small amount.
...
And when arguing with someone whose beliefs are near the second cluster, I don’t think that assuming the beliefs in the first cluster are obviously right is a great tactical move (I’m ignoring audience effects). In fact, back when I wasn’t vegetarian, I found that kind of move extremely annoying, and to some extent I still do (“that guy is saying that things which took me years to understand and/or come to share, and which in some cases are still not clear to me, are obviously true?”).
Instead, may I suggest a moral trade as a tactical move? (see: Morality at its best is a coordination game played in good faith)
You (@abrahamrowe) donate $4.30 (a factor of x10 because of your deep magnanimity) to @Jeff_Kaufman’s best human existential risk reduction charity (easily another factor of x10 according to long-termist assumptions).
@Jeff_Kaufman tries being vegetarian for a year (or changes his numbers above).
Considering this type of moral trade is only possible because the original poster quantified his preferences to the best of his ability. That should be highly lauded, and gets a strong upvote from me.
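To make the multipliers explicit, here is a minimal sketch of the arithmetic behind the proposed trade, assuming (as in the reply below) that $0.43/y is the original poster’s own quantification of the value he places on eating animal products; the factors are the ones claimed above, not independently justified:

```python
# Back-of-the-envelope arithmetic behind the proposed moral trade.
# Assumption: $0.43/year is Jeff's own quantified value of eating
# animal products (taken from the thread; not independently sourced).
baseline = 0.43          # $/year, Jeff's quantification
magnanimity_factor = 10  # Abraham donates 10x the baseline
donation = baseline * magnanimity_factor
print(f"Donation: ${donation:.2f}/year")  # $4.30/year

# The claim is that an x-risk charity is "easily another factor of x10"
# more effective under long-termist assumptions, so on those terms the
# donation is worth roughly 100x the baseline in expected value.
longtermist_factor = 10
effective_value = donation * longtermist_factor
print(f"Value under long-termist assumptions: ${effective_value:.2f}/year")  # $43.00/year
```

Both factors of 10 are doing all the work here; anyone who rejects either assumption will value the trade very differently, which is roughly what happens in the replies.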
While I think moral trades are interesting, I don’t know why you would expect me to see $4.30 going to an existential risk charity as enough to make going vegetarian for a year worth it. I’d much rather donate $4.30 myself and not change my diet.
I think you’re conflating “Jeff sees $0.43/y to a good charity as clearly better than averting the animal suffering due to omnivorous eating” with “Jeff only selfishly values eating animal products at $0.43/y”?
If anyone’s genuinely interested in this, I’ll switch my diet from eating organic meat ~5x a week to completely vegan in exchange for a donation to the Against Malaria Foundation. £10 per week.
(I think that’s a bad deal for everyone except AMF, since there are way better things you can invest in if you care about animal welfare, but I would genuinely do it!)
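For scale, a quick back-of-the-envelope on this offer, assuming it runs for a full year and taking “organic meat ~5x a week” at face value (both assumptions mine, not part of the original offer):

```python
# Rough cost of the proposed diet-change-for-donation trade.
# Assumptions (mine, not from the offer): the deal runs a full year,
# and "organic meat ~5x a week" means 5 meat-containing meals per week.
price_per_week = 10.0    # GBP donated to AMF per week
weeks_per_year = 52
meat_meals_per_week = 5

annual_donation = price_per_week * weeks_per_year
meals_avoided = meat_meals_per_week * weeks_per_year
print(f"Annual donation to AMF: £{annual_donation:.0f}")  # £520
print(f"Implied price per meat meal avoided: "
      f"£{annual_donation / meals_avoided:.2f}")          # £2.00
```

At roughly £2 per meat meal avoided, this supports the point above: as a pure animal-welfare purchase it compares poorly with effective animal charities, even if it is a fine deal for AMF.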
I agree that the direct effect on animals seems pretty low for the cost compared to EAA charities. I think most of the value would come from getting you to go vegan for a few months, or however long it takes for the diet to feel easy and automatic for you, for the chance that you might stick with it, reduce your consumption further, or increase your concern for animals in the long term. I think I remember you saying somewhere that you’ve been vegetarian before (correct me if I’m wrong), so I’m not sure an experiment with veganism would make much difference in the long term.
Also, there are EAs who are both already inclined to donate to AMF and concerned about animal welfare, so you might want to specify counterfactual donations. :)
Yes, I meant counterfactual donations, and yes I’ve spent a couple months vegetarian before. Good points both! :)
I agree that I was assuming a certain moral framework in my post—I’ve updated it to refer explicitly to utilitarianism of some kind, since that’s a fairly common view in EA.
Thanks for the moral trade idea!