I manage operations, research, publishing and grantwriting at Allied Scholars for Animal Protection, a startup nonprofit building a unified infrastructure for campus animal advocacy.
Philosophy and science fiction nerd.
Thanks for mentioning this! Looks like a great organization with a very similar mission to ours. Will keep them in mind and potentially reach out.
I think Vegan Hacktivists does some similar work. Perhaps someone in their orbit would be interested in working with you!
Thanks for writing this! Seems useful to have a term for excessive charitability. Being able to point at it succinctly might help mitigate information cascades.
Steelmanning is typically described as responding to the “strongest” version of an argument you can think of.
Recently, I heard someone describe it a slightly different way, as responding to the argument that you “agree with the most.”
I like this framing because it signals an extra layer of epistemic humility: I am not a perfect judge of what the best possible argument is for a claim. In fact, reasonable people often disagree on what constitutes a strong argument for a given claim.
This framing also helps avoid a tone of condescension that sometimes comes with steelmanning. I’ve been in a few conversations in which someone says they are “steelmanning” some claim X, but says it in a tone of voice that communicates two things:
The speaker thinks that X is crazy.
The speaker thinks that those who believe X need help coming up with a sane justification for X, because X-believers are either stupid or crazy.
It’s probably fine to have this tone of voice if you’re talking about flat earthers or young earth creationists, and are only “steelmanning” X as a silly intellectual exercise. But if you’re in a serious discussion, framing “steelmanning” as being about the argument you “agree with the most” rather than the “strongest” argument might help signal that you take the other side seriously.
Anyone have thoughts on this? Has this been discussed before?
Animal Visuals also has a graph of the number of animals killed per million calories.
And there’s this article, with a table where you can mess around with the numbers.
Discussions of the long-term future often leave me worrying that there is a tension between democratic decision-making and protecting the interests of all moral patients (e.g. animals). I imagine two possible outcomes:
Mainstream political coalitions make the decisions in their usual haphazard manner.
RISK: vast numbers of moral patients are ignored.
A small political cadre gains power and ensures that all moral patients are represented in decision-making.
RISK: the cadre lacks restraint and leaves its fingerprints on the future.
Neither of these is what we should want.
CLAIM: The most straightforward way to dissolve this tradeoff is to get the mainstream coalitions to care about all sentient beings before they make irreversible decisions.
How?
A major push to change public opinion on animal welfare. Conventional wisdom in EA is to prioritize corporate campaigns over veg outreach for cost effectiveness reasons. The tradeoff I’ve described here is a point in favor of large-scale outreach.
I don’t just mean 10x of your grandpa’s vegan leafletting. A megaproject-scale campaign would be an entirely different phenomenon.
A Long Reflection. Give society time to come to its senses on nonhuman sentience.
Of course, the importance of changing public opinion depends a lot on how hingey you think the future is, and tractability depends on how close you think we are to the hinge. But in general, I think this is an underrated point for moral circle expansion.
I wrote this quickly and am on the fence about turning it into a longer-form post.
Thanks for sharing this, I upvoted it. It’s cool to see efforts aimed at moving the Overton window on nonhuman sentience. In general I feel positively about this article and have a lot of respect for your work.
One worry I have about this type of public communication is that it runs the risk of distracting people from the more glaring problem of factory farming.
Caring about pigs is already way outside the Overton window. If we spill a lot of ink on really speculative claims in public-facing media, there’s a risk that people will conflate two very different phenomena:
An extreme moral catastrophe that we know is happening (factory farming)
An important but very speculative area of academic philosophy (microbe sentience etc.)
The former has a clear solution (eat plants), the latter might be completely intractable. The former involves lives that are almost certainly net-negative, the latter involves lives of unknown quality. The former is robustly terrible according to any sane worldview, the latter may hinge on population ethics and your approach to Pascal’s Mugging.
I think you could have better communicated this distinction, perhaps by having a paragraph early in the article that states in very clear terms how bad factory farming is.
Relatedly, there are a few parts of the article that try to communicate true and useful points but risk playing into misguided pro-meat tropes. Examples:
“If meat is murder, does that mean antibacterial soap is, too?”
Paragraph on plant sentience. The “plants have feelings” claim is actually an argument against eating meat, but most people don’t know this.
Separately, the hyperlink in this sentence seems to be incorrect:
And if they can suffer, as Jeff Sebo, a philosophy professor at NYU, argues in a prescient new paper, we should probably try to prevent that pain.
Strongly agree that if lock-in happens, it will be very important for those controlling the AIs to care about all sentient beings. My impression of top AGI researchers is that most take AI sentience pretty seriously as a possibility, and it seems hard for someone to think this without also believing animals can be sentient.
Obviously this is less true the further you get from AI safety/OpenAI/DeepMind/Anthropic. An important question is, if AGI happens and the control problem is solved, who ends up deciding what the AGI values?
I’m pretty uncomfortable with the idea of random computer scientists, tech moguls, or politicians having all the power. Seems like the ideal to aim for is a democratic process structured to represent the reflective interests of all sentient beings. But this would be extremely difficult to do in practice. Realistically I expect a messy power struggle between various interest groups. In that case, outreach to leaders of all the interest groups to protect nonhuman minds is crucial, as you suggest.
I wrote some related thoughts here, curious what you think.
Maybe I’m misunderstanding, but I think the article is referring to projections that meat production will double by 2050. He’s not claiming that climate change will be twice as bad without alt proteins. Instead, he’s saying that harms from meat production will double by 2050 without alt proteins.
1) Meat production is a significant contributor to climate change, other environmental harms (pretty much all of them), food insecurity, antibiotic resistance, and pandemic risk—causing significant and immediate harm to billions of people.
2) All of these harms are likely to double in adverse impact (or more) by 2050 unless alternative proteins succeed.
Seems accurate to me, though I see how it might be confusingly worded.
Agreed, it’s a pretty bizarre take. I’d be curious whether his views have changed since he wrote that FB post.
Thanks so much for writing this!
I’m healthier and drastically stronger than before I went vegan. Very happy to talk to anyone about vegan sources of protein (or any other nutrient).
Animal and AI Consciousness in The Economist
A succinct discussion of the current state of our understanding of consciousness. I love seeing things like this in mainstream media.
Interestingly, there’s also a reference to AI risk at the end:
As to conscious AI, Yoshua Bengio of the University of Montreal, a pioneer of the modern deep-learning approach to AI, told the meeting he believes it might be possible to achieve consciousness in a machine using the global-workspace approach. He explained the advantages this might bring, including being able to generalise results with fewer data than the present generation of enormous models require. His fear, though, is that someone will build a self-preservation instinct into a conscious AI, which could result in its running out of control.
This paragraph probably leaves readers with two misconceptions.
The wording implies that Bengio’s main worry is deliberate coding of a self-preservation instinct, whereas the more prevalent concern is instrumental convergence.
Readers may conclude that consciousness is required for AI to be dangerous, which is not the case.
It also would have been nice for the article to mention the ethical implications for how we treat nonhuman minds, but that’s usually too much to ask for.
Perhaps someone with better credentials than me could write them a letter.
FWIW stimulus-response is far from the only evidence we have for insect sentience. Table 1 in Jason Schukraft’s Invertebrate Sentience overview discusses some of the other criteria. The belief that some insects are sentient is pretty respectable in the scientific community; for example, Scientific American published an article on the subject this month.
Obviously the field is pretty speculative and I’m not an expert, but IMO the fact that many experts do take insect sentience seriously means we should probably put non-negligible credence in it.
Overall though, thanks for writing this post! It’s an important point. I suspect many people, when faced with ethical arguments for veganism, decide not to care about animals at all simply because they aren’t willing (or in rare cases unable) to go vegan. Classic example of failing with abandon.
Great questions!
Allied Scholars doesn’t run the dog meat website; that’s Molly Elwood. I’m not sure what kinds of metrics she has for that.
We haven’t collected rigorous data on engagement, but I’m very enthusiastic about people doing that kind of thing. There have been a few studies over the years suggesting that leafletting doesn’t really work (see ACE and Faunalytics), but I suspect there are lots of potential outreach tactics that work much better than leafletting. For example, Faunalytics found that showing people factory farming footage has a meaningful impact on behavior and attitudes around pork.
I’d love to see Faunalytics or other orgs study a broader variety of outreach tactics – I think there’s a risk that people see a study saying “leafletting doesn’t work” and conclude that vegan outreach in general is a lost cause. This is something I plan to write more about later this summer.
Anecdotally, when I’ve done standard vegan leafletting (with a sign that says “Why Aren’t You Vegan Yet?”), less than 1% of passersby engaged at all. With the dog meat stand, it felt more like 10%, though the number could easily be higher or lower and I’d have to actually keep track to know for sure.
There’s always a risk of pushing people away, but IMO this can mostly be mitigated if the organizers are nice to people in conversation.
I don’t think people who spoke to us were significantly turned off; the outreach volunteers weren’t overly aggressive or pushy.
People who walked past without engaging may have thought it was weird, but it’s hard to know what someone thinks if they don’t stop to talk.
Unfortunately, the paucity of social science research on vegan outreach makes it hard to know what works and what doesn’t. I imagine there’s research on outreach for other causes that could be relevant, but I haven’t looked into this.
By default I expect more engagement with pro-vegan arguments to be a good thing, and the dog meat stand got a lot more engagement than other outreach tactics I’ve tried.
Yeah!
Faunalytics did an RCT comparing 2D factory farming footage to 3D VR footage. They found that both led to lower pork consumption than in the control group, and that VR was not a meaningful improvement over 2D screens.
Another Faunalytics study compared students’ self-reported willingness to reduce consumption after viewing different videos.
The Humane League did RCTs on showing people a documentary on health, environment, & animal welfare reasons for veganism and did not find a significant effect on meat consumption. Notably, in the conclusion the authors say that “Novel intervention strategies may be needed to meaningfully shift dietary consumption away from meat and animal products.”
On the other hand, THL also published a meta-analysis with encouraging implications for interventions that focus on animal welfare-related messaging.
THL lists a few other relevant studies here that I haven’t looked at. There were also the leafletting studies by ACE and Faunalytics, both of which found leafletting to be ineffective.
With a lot of these studies, reliance on self-report introduces problems like social desirability bias.
I’ve also been thinking of writing up some kind of review, feel free to DM me if you want to collaborate :)
Seems like a great project for social science majors & student groups to replicate or do some variation of at other universities!