Stanford student (math/economics). Formerly intern at Rethink Priorities (animal welfare) and J-PAL South Asia (IDEA Initiative).
Tejas Subramaniam
The fourth objection, on who the victim is, has always seemed like the strongest explanation of the deontological moral difference to me. When you offset your CO2 emissions, you haven’t actually harmed anyone. (I’m personally inclined to place higher credence on utilitarianism than most other moral theories, so I’m not too bothered by this, and I also think it’s certainly better than the most plausible alternative – people eat meat but don’t offset it – but regardless, interesting philosophical question.)
The Carlsmith article you linked – post 1 of his two-post series – seems mostly to argue against the standard justifications people might give for why ethical anti-realists should reason about ethics (i.e., he argues that neither a brute preference for consistency nor money-pumping arguments seem like the whole picture). Are you perhaps thinking of the second piece in the two-post series instead?
Brian Tomasik considers increased selection toward animals with faster life histories in his piece on the effects of climate change on wild animals. He seems to think it’s not decisive (and ends up concluding that he’s basically 50–50 on the sign of the effects of climate change on overall animal suffering) for ~three reasons (paraphrasing Tomasik):
Some of the slower-life-history animals that get replaced are carnivorous or omnivorous, which might mean climate change increases invertebrate populations.
Instability might also affect plants, which could lower net primary productivity and hence invertebrate populations.
Many of the “ultimate” fast-life-history life forms will be microorganisms, on which we don’t place much moral weight.
I’d be curious how you think the arguments in the above post should change Tomasik’s view, in light of these considerations.
I didn’t say they fell under the ethics of killing; I was using killing as an example of a generic rights violation under a plausible patient-centered deontological theory, to illustrate the difference between “a rights violation happening to one person and help coming for a separate person as an offset” and “one’s own harm being directly offset.”
(I agree that it seems a bit less clear whether potential people can have rights – in particular, a right not to be brought into existence – even if they can have moral consideration, but I think it’s very plausible.)
Note, however, that I think the question of whether there can be deontic side-constraints on our treatment of animals is unclear even conditional on deontology. Many deontological philosophers – like Huemer – are uncertain whether animals have “rights” (as a patient-centered deontologist would put it), even though they think (1) humans have rights and (2) animals still deserve moral consideration. Deontologists sometimes resort to something like “deontology for people, consequentialism for animals” (although other deontologists, like Nozick, thought this was insufficient for animals).
I think offsetting emissions and offsetting meat consumption are comparable under utilitarianism, but much less comparable under most deontological moral theories, if you think animals have rights. For instance, if you killed someone and donated $5,000 to the Malaria Consortium, that seems worse – from a deontological perspective – than if you had just done nothing at all, because the person you kill and the person you save are different people, and many deontological theories are built on the “separateness of persons.” In contrast, if you offset your CO2 emissions, you’re offsetting your effect on warming, so you don’t kill anyone to begin with (it’s not as though your CO2 emissions cause warming that hurts agent A, and then your offset reduces temperatures to benefit agent B). It might be similarly problematic to offset your contribution to air pollution, though, because the harms of air pollution occur near where the pollution is actually emitted.
Why do you think excruciating pain is 10,000 times as intense as disabling pain? If I use these conversion factors (p. 30) instead, chicken welfare campaigns seem to win.
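To make concrete how much the bottom line depends on these conversion factors, here’s a minimal sketch in Python. Every number in it is a made-up placeholder (the real figures are in the analyses being discussed and in the linked report, p. 30); the only point is that the ranking between a chicken welfare campaign and some other intervention can flip when the weight on excruciating pain drops from 10,000x disabling pain to something much smaller:

```python
# Illustrative sketch only: the hours below and the alternative weights are
# hypothetical placeholders, not the actual estimates from either analysis.

def weighted_pain_averted(hours_by_category, weights):
    """Convert hours of pain averted in each category into
    disabling-pain-equivalent hours using the given weights."""
    return sum(hours_by_category[cat] * weights[cat] for cat in hours_by_category)

# Hypothetical hours of pain averted per $1,000 by two interventions.
chicken_campaign = {"hurtful": 5_000.0, "disabling": 800.0, "excruciating": 0.05}
other_intervention = {"hurtful": 200.0, "disabling": 50.0, "excruciating": 0.5}

# Weights relative to disabling pain. The 10,000x figure is the one questioned
# above; the 60x alternative is just an illustrative placeholder.
weights_10k = {"hurtful": 0.1, "disabling": 1.0, "excruciating": 10_000.0}
weights_alt = {"hurtful": 0.1, "disabling": 1.0, "excruciating": 60.0}

for label, w in [("excruciating = 10,000x disabling", weights_10k),
                 ("excruciating = 60x disabling", weights_alt)]:
    chicken = weighted_pain_averted(chicken_campaign, w)
    other = weighted_pain_averted(other_intervention, w)
    winner = "chicken campaign" if chicken > other else "other intervention"
    print(f"{label}: chicken={chicken:,.0f}, other={other:,.0f} -> {winner}")
```

With these placeholder inputs, the “other intervention” wins under the 10,000x weight and the chicken campaign wins under the smaller weight, which is the shape of the disagreement.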
[Question] Where would you donate $100 to animal welfare?
Against the “debt-trap diplomacy” narrative
Do you think there are promising ways to slow down growth in aquaculture?
This post by Carl Shulman is very similar to this one, I think.
[Question] What are the best defenses of human lives being worth living?
Somewhat relevant (takes the hard proves-too-much stance): https://www.econlib.org/archives/2014/10/dear_identity_p.html
She co-authored a piece a few months back about finding AI safety emotionally compelling. I’d be interested in her thoughts on the following two questions related to that!
How worried should we be about suspicious convergence between AI safety being one of the most interesting/emotionally compelling questions to think about and it being the most pressing problem? There used to be a lot of discussion around 2015 about how it seemed like people were working on AI safety because it’s really fun and interesting to think about, rather than because it’s actually that pressing. I think that argument is pretty clearly false, but I’d be curious how she views this post as interacting with those concerns.
It seems a bit like the post doesn’t draw a clean distinction between capabilities and safety. I agree that, to some extent, they’re inseparable (the people building transformative AI should care about making it safe), but how does she view the downside risks of, e.g., some of the most compelling parts of AI work being capabilities-related? More generally, how worried should we be, as a community, about how interconnected safety and capabilities work are?
Somewhat related: As Patrick Collison puts it, people working on making more effective engineered viruses aren’t high-status among people working on pandemic prevention, so why are capabilities researchers high-status among safety researchers?
(I have a decent sense of different answers within the community – this is not really a top concern of mine – but I’d nonetheless be interested in her take! My sense is that (1) the distinction isn’t nearly as clean since you want to build AI and make it go safely and (2) it’s good for capabilities work to be more safety-geared than the counterfactual.)
Thanks for writing this up! I disagree for a few reasons:
This feels more like a problem at the point between “alternative proteins have scaled up and we’ve replaced a bunch of meat” and “this results in a meat ban.” It seems possible to me that moral advocacy efforts can happen after alternative proteins have scaled up but before there are laws to stop factory farming for food entirely. I don’t think alternative proteins replacing, say, 80% of meat will result in people thinking non-meat uses of animals are morally okay in a lock-in kind of way.
I think a lot of people’s moral reasoning about animals is post hoc/based on cognitive dissonance. That is, people like eating meat, or it’s a valuable part of their culture, and their moral intuitions around animal exploitation are built around that. So it seems plausible to me that moral advocacy efforts become substantially more effective if we’re able to quickly replace one of the biggest uses of animals.
I’m not sure I’m compelled by the mechanism for lock-in. One mechanism appears to be overconfidence/complacency as a society, which reduces the drive toward moral progress. This seems somewhat plausible, but it feels solvable (for instance, animal advocacy organizations could pivot toward other uses of animals and dedicate more resources to advocacy about them). Another mechanism seems to be that “letting automobiles replace horses as practical transport instead of listening to the horse advocates and becoming better humans, humanity has lost a great opportunity to do something for the animals for moral reasons, and do so by accepting an economic loss.” But I’m not sure why – in either the horse case or the factory farming case – this is a unique opportunity. I don’t think the existence of factory farming necessarily strengthens the case, to an average person, for the urgency of animal advocacy: if people don’t buy the moral reasoning for caring about animals, I’m not sure the current scale of suffering affects whether they come to buy it. So in the case of horses, for example, I don’t think it was easier to convince people that horses matter before they were replaced as practical transport.
I feel like this is just intractable. Meat has the advantage of being embedded in culture and identity for generations. Proposing no alternative and going entirely through the moral route means going up against this generational idea that eating meat is okay. Success seems hard. I’m wary of taking such a risk when there’s also the possibility of factory farming for food persisting into the future (and I’d guess that, in business-as-usual scenarios, it remains a bigger problem than other kinds of factory farming). I’ll also say I’m not convinced that expanding our moral circle to animals helps expand our moral circle to things like digital minds in the far future, though that’s a conversation for another day.
I’m uncomfortable with this argument for nonconsequentialist reasons. If factory farming is a grave injustice that ought to be abolished (even if you’re a consequentialist who buys moral uncertainty), it seems like letting it stay for much longer, and taking a huge risk that it stays forever, because you want to end it for the right reasons could be a massive negligent injustice in itself. It feels morally akin to saying “it’s bad to hire more beat cops to deter crime, because deterring crime through fear doesn’t convince anyone that their crime is wrong.” One reason a lot of people would find that intuitively bad is that it feels like instrumentalizing the victims of crime for a dubious future consequence.
Thanks for this! You may also find this post of interest.
Benjamin Todd makes some similar points here.
Greg Mankiw’s introductory econ textbook has a good explanation of a similar point:
LeBron James is a great athlete. One of the best basketball players of all time, he can jump higher and shoot better than most other people. Most likely, he is talented at other physical activities as well. For example, let’s imagine that LeBron can mow his lawn faster than anyone else. But just because he can mow his lawn fast, does this mean he should?
Let’s say that LeBron can mow his lawn in 2 hours. In those same 2 hours, he could film a television commercial and earn $30,000. By contrast, Kaitlyn, the girl next door, can mow LeBron’s lawn in 4 hours. In those same 4 hours, Kaitlyn could work at McDonald’s and earn $50.
In this example, LeBron has an absolute advantage in mowing lawns because he can do the work with a lower input of time. Yet because LeBron’s opportunity cost of mowing the lawn is $30,000 and Kaitlyn’s opportunity cost is only $50, Kaitlyn has a comparative advantage in mowing lawns.
(From Mankiw, G., Principles of Economics, p. 54, 9th edition)
Suppose we modify this example, such that:
LeBron was the best in the world at mowing lawns.
LeBron doesn’t make more money from television commercials than any other celebrity in the world.
Even though LeBron is better at mowing lawns than at filming television commercials, and also ranks higher among those who mow lawns than among those who film commercials, he should still film the commercial.
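To spell out the arithmetic, here’s a minimal sketch in Python using only the numbers from the quoted Mankiw passage; the closing comment notes why the modified example doesn’t change the conclusion:

```python
# Opportunity-cost comparison, using the numbers from the quoted Mankiw passage.

lebron_mow_hours = 2            # hours LeBron needs to mow his own lawn
lebron_commercial_pay = 30_000  # what he could earn filming a commercial in those 2 hours

kaitlyn_mow_hours = 4           # hours Kaitlyn needs to mow the same lawn
kaitlyn_mcdonalds_pay = 50      # what she could earn at McDonald's in those 4 hours

# Opportunity cost of mowing = the best alternative each person gives up.
lebron_opportunity_cost = lebron_commercial_pay    # $30,000
kaitlyn_opportunity_cost = kaitlyn_mcdonalds_pay   # $50

print(f"LeBron mows faster ({lebron_mow_hours}h vs {kaitlyn_mow_hours}h): absolute advantage.")
print(f"Opportunity cost of mowing: LeBron ${lebron_opportunity_cost:,}, "
      f"Kaitlyn ${kaitlyn_opportunity_cost:,}.")
print("Kaitlyn has the comparative advantage in mowing, so LeBron should film the commercial.")

# The modification changes neither number: even if LeBron is the best lawn mower
# in the world and only a middling earner among celebrities, his opportunity cost
# of mowing is still $30,000 versus Kaitlyn's $50, so the conclusion is unchanged.
```

The point is that what matters isn’t LeBron’s rank within either activity but what he gives up by doing one rather than the other.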
I think the expected value of the long-term future, in the “business as usual” scenario, is positive. In particular, I anticipate that advanced/transformative artificial intelligence will drive technological innovation that solves a lot of world problems (e.g., helping create cell-based meat eventually), and I also think a decent amount of this EV is contained in futures with digital minds and/or space colonization (even though I’d guess it’s unlikely we get to that sort of world). However, I’m very uncertain about these futures – they could just as easily contain a large amount of suffering. And if we don’t get to those futures, I’m worried about wild animal suffering being high in the meantime. Separately, I’m not sure addressing a lot of s-risk scenarios right now is particularly tractable (nor, in the nearer term, does wild animal suffering seem awfully tractable to me).
Probably the biggest reason I’m so close to the center is that I think a significant amount of existential risk from AI looks like disempowering humanity without killing literally every human; hence, I view AI alignment work as at least partially serving the goal of “increasing the value of futures where we survive.”