Contractor RA to Peter Singer, Princeton
Fai
Why the expected numbers of farmed animals in the far future might be huge
Could it be a (bad) lock-in to replace factory farming with alternative protein?
AI ethics: the case for including animals (my first published paper, Peter Singer’s first on AI)
Philosophers speaking against the mistreatment of animals: The Montreal Declaration On Animal Exploitation
Thank you for the post! I have a question: do you think the trajectory to be shaped should be that of all sentient beings, rather than just humanity? It seems to me that you think we ought to care about the wellbeing of all sentient beings. Why isn't this principle extrapolated when it comes to longtermism?
For instance, judging from the quote below from the essay, it seems to me that your proposal's scope doesn't necessarily include nonhuman animals: "For example, a permanent improvement to the wellbeing of animals on earth would behave like a gain (though it would require an adjustment to what v(⋅) is supposed to be representing)."
But what about the impact on the topic itself? Steering the discussion heavily toward a largely irrelevant topic, and affecting the post's voting as a result, doesn't do the original topic justice. And this topic could potentially be very important for the long-term future.
Re: Some thoughts on vegetarianism and veganism
Preventing factory farming from spreading beyond the earth
Space governance, moral circle expansion (yes I am also proposing a new area of interest.)
Early space advocates such as Gerard O'Neill and Thomas Heppenheimer both included animal husbandry in their designs for space colonies. In our time, the European Space Agency, the Canadian Space Agency, the Beijing University of Aeronautics and Astronautics, and NASA have all expressed interest in, or announced projects on, employing fish or insect farming in space.
If successful, this might multiply the suffering of farmed animals many times over relative to the numbers farmed on earth today, spread across the long-term future. Research is needed in areas like:
Continuous tracking of the scientific research on transporting and raising animals in space colonies or other planets.
Tracking, or even conducting research on the feasibility of cultivating meat in space.
Tracking the development and implementation of AI in factory farming, which might enable unmanned factory farms and therefore make space factory farming more feasible. For instance, the aquaculture industry is hoping that AI can help them overcome major difficulties in offshore aquaculture. (This is part of my work)
How likely alternative proteins, such as plant-based meat and cultivated meat, are to substitute for all types of factory farming, including fish and insect farming.
The timelines of alternative proteins, particularly cultivated meat. We are particularly interested in how these compare with space colonization timelines, or in other words, whether alternative proteins will succeed before major efforts at space colonization.
Philosophical work on the ethics of space governance, in relation to nonhuman animals.
(note: I am actually writing a blogpost on factory farming in space/in the long-term future, stay tuned or write a message to me if you are interested)
(update: I posted it: https://forum.effectivealtruism.org/posts/bfdc3MpsYEfDdvgtP/why-the-expected-numbers-of-farmed-animals-in-the-far-future)
I think that’s a strong reason for people other than Jacy to work on this topic.
Watching the dynamic here, I suspect this is likely true. But I would still like to point out that there should be a norm for how these situations are handled. This likely won't be the last EA Forum post that goes this way.
To be honest, I am deeply disappointed and very worried that this post has gone this way. I admit I might feel this way because I am very sympathetic to the key views described in this post. But I think one can imagine how they would feel if certain monumental posts, crucial to the causes and worldviews they care dearly about, went this way.
Thank you for the post!
I have long suspected that EA organizations in other cause areas are held to higher standards of evaluation when getting funding (mainly, but not only, from EA funders) than AI safety organizations. I think I have updated slightly upward on the likelihood of this view being right after reading this post.
More information on the comparison I am suspecting, using EA animal welfare organizations as an example, as I have some experience in this cause area: my suspicion is that, relative to AI safety grantees, animal welfare organizations receive much more scrutiny of their track records, staff experience, work culture, etc.
Also, my observation is that in animal welfare organizations, efforts to pay more sustainable and competitive salaries (starting from quite low levels and huge relative pay cuts) are not particularly welcomed by all donors. (To be fair to the donors, some EA animal welfare organizations pay very low salaries because their management refuses to pay higher.) I am therefore puzzled why this kind of pressure doesn't seem to exist as much in some other EA cause areas (and why it has to exist, to its current extent, in EA animal welfare). Granted, one underlying reason AI safety organizations pay high salaries is that the for-profit salaries available to people who can work in AI safety are higher, so these people are already taking huge pay cuts to work at nonprofit AI safety organizations. But judging from the salary levels mentioned in this post, Redwood seems to be experiencing much less pressure to suppress salary levels, comparatively. Notice also that their staff earn significantly more than their peers in academia, which is something not generally seen in EA animal welfare.
I don't think I am the only one with this kind of suspicion. At least 5 people from EA animal welfare have expressed to me their concerns, even complaints, that non-longtermist organizations are treated unfairly relative to longtermist organizations, especially AI safety ones. From my observation, and I hope I am wrong, there seems to be some anti-longtermism/anti-AI-safety sentiment circulating in the animal welfare cause area. I think this might be causing some community-building problems within EA and may be worth addressing. (Fwiw, I endorse some form of longtermism and I see a connection between animal welfare and longtermism. I now work on AI's impact on animals.)
I think this topic is more relevant than the original one.
Relevant with respect to what? To me, the most sensible standard here seems to be whether it is relevant to the original topic of the post (the thesis being brought up, or its antithesis). Yes, the topic of personal behavior is relevant to EA's stability and therefore to how much good we can do, or even to the long-term future. But considering that there are other ways of communicating what is being said here, such as starting a new post, I don't think we should use this criterion of relevance.
Ideas, however important to the long-term future, can surface more than once.
That's true, logically speaking. But it's also logically true for EA/EA-like communities: it's always "possible" that if this EA breaks, another similar one "could be" formed again. But I am guessing not many people would like to take that bet based on the "come again" argument. Then what is our reason for being willing to take a similar bet with this potentially important (I believe crucial) topic, or with any topic?
And again, the fact that there are other ways to bring up the topic of personal behavior makes it even less reasonable to use this argument as a justification here. In other words, there seem to be far better alternatives for "reducing X-risk to EA" than the commenting pattern happening here, which risks forcing a topic away from the surface.
And we cannot say that if something "can surface more than once", we should also expect it to "surface before it is too late" or "surface with the same influence". Timing matters, and so do the comment sections of all historical discussions on a topic.
There are also some even more down-to-earth issues, such as future writers on this topic experiencing difficulties of many sorts. For example, seeing that this post went this way, should the writer of the next similar post (to be honest, I have long thought of writing one) just pretend that this post doesn't exist? That seems like bad intellectual practice. But if they do cite this post, readers will see the comment section here, and one might worry that readers will be affected. More specifically, what if Jacy got this post exactly right? Should people who hold exactly the same view pretend this post doesn't exist and post almost exactly the same thing?
I haven’t voted on the post either way despite agreeing that the writer should probably not be here.
I am glad you tried to be fair to the topic. But I would just like to point out that "not voting either way" isn't absolute proof that you haven't been affected: you could have voted positively if not for the extra discussion.
I don’t know about anyone else, but I suspect the average person here is even less prone than me to downvote for reasons unrelated to content.
I have to say I am much more pessimistic than you on this. I think it's psychologically quite natural that, with such comments in the comment section, one might find it hard to concentrate through such a long piece, especially if one takes a stance against the writer's behavior.
I am mindful of the fact that I am contributing to what I am suspecting to be bad practice here, so I am not going to comment on this direction further than this.
There are a ton of judgement calls in coming up with moral weights. I'm worried about a dynamic where the people most interested in getting deep into these questions are people who already intuitively care pretty strongly about animals, and so the best weights available end up pretty biased
I agree there's such a problem. But I think it is important to also point out that the same problem exists for people who think they "do not make judgement calls about moral weights", but have nonetheless effectively made their own judgement calls in their daily lives, which "by the way" affect animals (eating animals, living in buildings whose construction kills millions of animals, gardening, which harms and gives rise to many animals, etc.).
Also, I think it is equally, maybe more, important to recognize that people who make such judgement calls without explicitly thinking about moral weights, let alone undertaking tedious research projects, are people who intuitively care pretty little about animals, and so the "effective intuitions about moral weights" (intuitive because they didn't want to use research to back them up) behind their actions end up pretty biased.
I think I intuitively worry more about the bias of those who do not feel particularly strongly about animals' suffering (even the suffering they themselves cause) than about the bias of those who care pretty strongly about animals. And of course, disclaimer: I belong to the latter group.
I just don’t see any reason why thousands of years of cultural practice would not generate a behavior with such obvious and immediate benefits.
I disagree with this. I know of quite a lot of examples of people not using clearly beneficial methods.
One case study I have researched quite extensively is the slaughter of a fish called the pond loach, commonly consumed in China, Japan, and Korea. They are small and slimy, and therefore hard to grab and handle. In most of Korea and many parts of China, pond loaches are put in buckets and sprinkled with salt, which kills or immobilizes them by osmotic dehydration (this method sometimes doesn't kill all of them immediately) and also deslimes them a bit. This makes salt a very effective way of slaughtering pond loaches, as it makes them easy to grab and handle. Another benefit of using salt is that it is needed in the dish anyway. But for some reason, people in some parts of China and Japan use other, much more dangerous and time-consuming ways of killing pond loaches. (DISCLAIMER: I am not claiming that people should use salt to kill pond loaches. In fact, I think it is one of the worst slaughter methods for animals in the world, and I am working to eliminate this practice.)
Another example is my experience working as a production manager in a garment factory. It took me less than 15 minutes to figure out that one of their processes could be done in a different way that saves >30% labor time, and it was literally as easy as holding a component backwards. They changed to my method and never went back (PM me if you are interested in the full details). My boss and all the previous production managers had huge incentives to optimize everything in the production line; I mean, they are a business in an extremely competitive environment! But they didn't figure this one out until I joined.
Wild animal suffering in space
Space governance, moral circle expansion.
Terraforming other planets might cause animals to come to exist on those planets, through either intentional or unintentional actions. These animals might live net-negative lives.
Also, we cannot rule out the possibility that there are already wild "animals" (or sentient beings of any form) suffering net-negative lives on other planets. (This does not relate directly to the Fermi Paradox, which concerns highly intelligent life, not life per se.)
Relevant research includes:
Whether wild animals lead net-negative or net-positive lives on earth, and under what conditions; and whether the same would hold on other planets.
Tracking, or even doing, research on using AI and robotics to monitor and intervene in habitats. This might be critical if there are planets that have wild "animals" but are uninhabitable for humans, who therefore cannot stay close to monitor (or even intervene in) the welfare of these animals.
Communication strategies related to wild animal welfare, as the topic seems to tend to cause controversy, if not outrage.
Philosophical research, including population ethics, environmental ethics, comparing welfare/suffering between species, moral uncertainty, suffering-focused vs non-suffering focused ethics.
General philosophical work on the ethics of space governance, in relation to nonhuman animals.
Thank you for the post!
I have thought a lot about one particularly biased piece of terminology in longtermism: "humanity" / "future people". Why is it not "all sentient beings"? (I am guessing one big reason is strategic rather than conceptual.)
I disagree here. Even though I think it's more likely than not that space factory farming won't go on forever, it's not impossible that it will stay, and the chance isn't vanishingly low. I wrote a post on it.
Also, for cause prioritization, we need to look at the expected values of the tail scenarios. Even if the chance is as low as 0.5%, or 0.1%, the huge stakes might mean the expected value is still astronomical, which is what I argue for space factory farming. I think what we would need is to show that factory farming will go away in the near/mid-term future with certainty, and I don't see good arguments for that.
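The expected-value point can be sketched as a toy calculation. All magnitudes below are illustrative assumptions chosen only to show the shape of the argument, not estimates from this comment:

```python
# Toy expected-value comparison: a low-probability, huge-stake tail
# scenario versus a certain, modest-stake scenario.
def expected_value(probability: float, stake: float) -> float:
    """Expected (dis)value of an outcome: probability times stake."""
    return probability * stake

# Hypothetical numbers: suppose locked-in space factory farming carries
# 10,000x the stake of near-term factory farming (normalized to 1.0).
near_term_stake = 1.0      # certain scenario, normalized units
tail_stake = 10_000.0      # hypothetical long-term stake
tail_probability = 0.005   # the 0.5% figure from the comment

ev_tail = expected_value(tail_probability, tail_stake)
print(ev_tail)                    # 50.0
print(ev_tail > near_term_stake)  # True: the tail scenario dominates
```

Under these assumed numbers, even a 0.5% chance leaves the tail scenario 50 times larger in expectation than the certain near-term scenario, which is the structure of the argument above.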
For example, there is no proof that cellular agriculture is more energy- and resource-efficient than all kinds of factory farming. In fact, insect farming and the raising of certain species of fish are very efficient. Cellular agriculture also takes a lot of energy to work against entropy, especially if the requirement for the alignment of protein structures is high. In terms of organizing matter against entropy, biological beings are actually quite efficient, and cellular agriculture might have a hard task outperforming all animal protein. There needs to be serious scientific research specifically addressing this issue before we can claim that cellular agriculture will be more efficient in all possible ways.
On humans becoming compassionate: I feel pessimistic about that, because here we are talking about moral circle expansion beyond our own species. Within our species, whether it be women, people of color, the elderly, children, or LGBTQ people, they all share very similar genes with dominant humans (who, historically, were generally white men), similar neural structures (so that we can be sure they suffer in similar ways), and shared natural languages. All of this made it rather easy for dominant humans to understand dominated humans reasonably well. It won't be the same for our treatment of nonhumans, such as nonhuman animals and digital minds without natural language capabilities.
But wouldn't a new post on this topic serve the same purpose of expressing and discussing this concern, without affecting this topic?
Thank you for posting, and thank you for doing the presentation. This is for me the best presentation on wild animal suffering addressing a vegan audience so far (with some other options being quite impressive already).
Strong upvote. I really admire this piece. Thank you for writing and posting it. I think literature and the arts played a significant (I don't mean major, certainly not sole) role in expanding the moral circle to the point of including all humans, as they were a great way for people to understand the circumstances and hardships of people with strikingly different cultural, linguistic, social, and economic backgrounds from their own. Maybe they could work for nonhuman animals too.
I really wish (and I believe it is very possible) that there will one day be an AI that can automatically animate a script, so that great scripts like this can gain yet another powerful way of conveying important messages. Yes, the animation would be horrific, but it may also save lives.
Also, I think it is beneficial that this kind of post occasionally appears on the forum.
Thank you for writing this! I have been thinking about some ideas that could become megaprojects; just throwing some of them out here (you have already listed some of them):
Pay to install electric stunners in "small fish slaughter machines", which are popular in China. The idea is to pay far more than the cost of installing such stunners, so that the whole industry that produces these machines is disrupted. I am doing research on this potential project. My tentative judgement is that installing such stunners might be cheap; it could be as simple as connecting electricity to the gear wheels that push the fish into the machine. According to the producers' claims, these machines can kill 100-15,000 fish per hour, depending on the species. I estimate that a $100 payment per machine would be enough to incentivise the majority of these machines' producers to agree to a deal.
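To see why a one-off $100 payment could be very cheap per animal affected, here is a back-of-envelope sketch. The $100 payment and the 100-15,000 fish/hour throughput range come from the idea above; the machine-lifetime figure is a hypothetical assumption:

```python
# Back-of-envelope cost per fish stunned for the stunner-payment idea.
payment_per_machine = 100.0              # USD, one-off (from the comment)
throughput_low, throughput_high = 100, 15_000  # fish/hour (producers' claims)
lifetime_hours = 1_000                   # hypothetical hours of use per machine

# Worst case: the slowest machine gives the highest cost per fish.
cost_per_fish_worst = payment_per_machine / (throughput_low * lifetime_hours)
cost_per_fish_best = payment_per_machine / (throughput_high * lifetime_hours)

print(cost_per_fish_worst)  # 0.001 USD, i.e. a tenth of a US cent per fish
print(cost_per_fish_best < cost_per_fish_worst)  # True
```

Even under the worst-case throughput and a modest assumed lifetime, the payment works out to a fraction of a cent per fish, which is why the intervention could be cost-effective if the producers accept the deal.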
Invent a new "cleansing machine" for crayfish (I estimate that Chinese people eat 4,000B of them each year). The special thing about crayfish is that they are so dirty for human consumption that they require heavy "cleansing" before being cooked. Shallow research shows that the currently most common method of "cleansing" them is an ultrasonic bath while they are still fully alive (for about 20 minutes), which seems extremely painful if they are sentient (in this video you can see the crayfish crawling out to escape the ultrasonicated water). The idea is to invent a new method that is much less painful and sell the machine at a price that would crowd out all other methods. Without having done virtually any serious research, my guess is that an electrical bath before the cleansing starts is the way to go.
AI For Animals (This could be the name of a new charity)
AI to monitor the welfare situation (not according to factory farms' definitions) in factory farms, transportation, and retail spots. This is especially important for fish, because there are many species farmed.
AI to speed up PB/CM research. A lot of PB/CM startups now have 1-2 computer scientists/data scientists. Instead of each of them hiring expensive CS/DS/ML researchers and working in isolation, start a research hub to distribute the new science and technology.
AI to decipher animal "language". This is something that a few academic teams are already working on, but with small project scales, and on animals that are not farmed. This is a moonshot, for maybe the truth is that high-level nonhuman animal-human communication is impossible. But if we can directly communicate with them, or at least know what they want to express, it will help us understand a lot more about nonhuman animals, and probably help advocacy (imagine knowing a fish's complaint or pleading!).
AI for tackling wild animal suffering. This is a big idea with potentially many sub-applications:
AI (drones) to identify dying wild animals and euthanize them. This approach even avoids the problem/accusation that we might cause unpredictable changes, and therefore possibly harms, when we alter what happens in ecosystems. For example, if a deer fell off a cliff and is painfully waiting to die, whether the death is caused by time, hunger, a predator, or a drone (as long as the killing doesn't leave behind things like bullets or toxic chemicals) likely won't affect what happens inside that ecosystem. The same might even be said in some cases where the animals are not yet dying: for example, if a forest fire is definitely going to kill certain animals and the AI deems it impossible to move them out of the fire's path, the drones can kill them before the fire reaches them. Again, in this case, whether the animals are dead before the fire reaches them won't affect the ecosystem; the only difference is the amount of suffering that happens (burning alive is often thought to be one of the most painful deaths).
AI to track/count wild animals. Quite a number of teams are already doing this, but none of them are interested in the suffering/welfare of individual animals. If a team were interested in tackling WAS, the animals used to train the tracking/counting AI would be very different. Being able to count wild animals, monitor their movements, and track changes in populations are all hugely important for research that guides wild animal welfare interventions.
AI to identify wild animal welfare levels. This is the wild animal version of the farmed animal welfare identification AI, but probably much harder, as wild animals are not in controlled environments, so data gathering and data curation will be much harder.
Computer models to predict the effects of interventions. Many people are working on this, but as far as I know they are mostly, if not entirely, interested in "classical conservation", which is not concerned with the welfare of individual animals. We need models that can predict the change in net welfare in a system when certain interventions are introduced.