Currently working as a Community Associate at Center on Long-Term Risk and as an independent s-risk researcher. Former scholar at SERI MATS 4.1 (multipolar stream), former summer research fellow at Center on Long-Term Risk, and former intern at Center for Reducing Suffering, Wild Animal Initiative, and Animal Charity Evaluators. Former co-head of Haverford Effective Altruism.
Research interests:
• AI alignment
• animal advocacy from a longtermist perspective
• acausal interactions
• artificial sentience
• commitment races
• s-risks
• updatelessness
Feel free to contact me for whatever reason! You can set up a meeting with me here.
James Faville
Announcing the CLR Foundations Course and CLR S-Risk Seminars
I’ve noticed in my work that some people assume that “moral circle expansion” is a benefit of some animal advocacy campaigns (e.g. fish welfare) and not others (e.g. dog welfare).
I think the main difference is that fish are not considered worthy of significant moral concern by most people, who view them more as living objects. With companion animal species, it is understood in many communities that their interests are very important. This doesn’t prevent serious welfare concerns involving them, but I think those concerns are usually more a symptom of insufficient awareness and action than of a denial that the concerns are valid. So if we hold people’s current valuation of animals’ interests fixed but consider a world in which we are much better able to put our values into effect (as may be the case in some futures), then companion animal species, but not fish, would hopefully be fairly well off. Therefore, if values might be locked in at the level of which species matter, it seems important that we act to extend concern to as many species as possible (ignoring backfire risk).
Caveats to the above:
In some communities it is less common to view typical companion animal species as worthy of significant moral concern.
Some campaigns to improve fish welfare might not result in significantly more moral concern for them. This probably depends on the exact target and conduct of the campaign.
If we’re not expecting even a “soft lock-in” at the level of which species matter (where “soft lock-in” = it becomes way less tractable but not impossible to spread concern for neglected species in the future), then maybe this work is not time-sensitive.
If instead a lock-in occurs at the level of more abstract values, we might prefer to spread a value of “care about all of sentient life”, and as long as evaluations of sentience are not extremely value-laden this may be enough to ensure good worlds in situations where human values dominate the future. Then spreading concern for fish is important mostly insofar as it reinforces this abstract value of sentientism. Maybe advocacy for companion animal species can further sentientist values as well.
As you point out, there is also the potential for secondary transfer effects, where expanding concern to one additional species/type of entity increases concern for others. My impression is that the significance of this effect for nonhumans is debatable, but it’s been studied a little in the psychology literature (maybe see this review).
That said, I probably prioritise companion animal welfare more than most EAs! Relative to farmed animals, I think humanity might have slightly more of a deontological duty to companion animals; we have higher confidence that companion animal species are sentient in most cases; and advocacy for companion animal species seems less likely to backfire. I also care about it more from a partial perspective. Given the current distribution of resources in animal advocacy, I’d rather marginal resources go to farmed/wild animals unless there’s a particularly good opportunity to help companion animal species, though I think I endorse some level of disproportionality in spending (a good deal less than the current level we see).
The hidden complexity of “moral circles”
Thanks for the post—I’ve encountered this “consciousness must arise from an analog substrate” view before in places like this conversation with Magnus Vinding and David Pearce, and am interested in understanding it better.
I don’t think I really follow the argument for this view, but even granting that consciousness requires an analog substrate, would that change our priorities? It seems as though those who want to create artificial sentience (including conscious uploads of human minds) would just use analog computers instead. I suppose if you’re imagining a future in which artificial sentience might arise and have transformative effects before other forms of transformative AI, this consideration could be important, since developing analog computers competitive with digital ones would take time and might be intractable for now.
But assuming high-fidelity digital mind emulations are essentially p-zombies and aren’t behaviourally distinguishable from conscious minds, I think there are only a few ways your argument would have strategic relevance for us, none of which seem super compelling to me.
It could be that we should expect people to be comfortable being uploaded as digital minds in worlds where digital minds are in fact conscious, but not comfortable with this otherwise. I don’t think the public is good enough at philosophy of mind that this would hold!
We could be concerned that the first few uploads, created before we develop a better understanding of consciousness, were made on the mistaken assumption that they would have subjective experience, and are not having the (hopefully happy) lives we wanted them to have. But this seems pretty low-stakes to me.
There might be path-dependencies in what human-originating civilization winds up valuing, and unless we adopt the view that consciousness requires analog substrates before creating supposedly-conscious digital minds, we are at greater risk of ending up with the “wrong” moral valuation.
Maybe it is important for us to have a very clear understanding of consciousness, and this is a key component of that. (But I would be wary about backfire risk: I expect in the current moment advancing our understanding of consciousness is slightly negative for the reasons discussed here.)
I agree that veg*n retention is important, thanks for writing this up!
Another reason for concern here is that ex-veg*ns might be a significant source of opposition to animal advocacy, because they are motivated to express a sense of disillusionment/betrayal (e.g. see https://www.reddit.com/r/exvegans/) and because their stories can provide powerful support to other opponents of animal advocacy.
Note that the Faunalytics study finds that a decent number (37%) of ex-vegetarians are interested in trying again in the future, which bodes well for future outreach to them and mitigates my concern above a little bit.
There’s another very large disadvantage to speeding up research here—once we have digital minds, it might be fairly trivial for bad actors to create many instances of minds in states of extreme suffering (for reasons such as sadism). This seems like a dominant consideration to me, to the extent that I’d support any promising non-confrontational efforts to slow down research into whole brain emulation (WBE), despite the benefits to individuals that would come from achieving digital immortality.
I also think digital people (especially those whose cognition is deliberately modified from that of baseline humans, to e.g. increase “power”) are likely to act in unpredictable ways—because of errors in the emulation process, or the very different environment they find themselves in relative to biological humans. So digital people could actually be less trustworthy than biological people, at least in the earlier stages of their deployment.
Another potential application of an urban design background is in wild animal welfare: some aspects of city planning might predictably affect the number of urban wild animals living there and their quality of life.
I have some draft reports on this matter (one on longtermist animal advocacy and one on work to help artificial sentience), written during two internships, which I can share with anyone doing relevant work. I really ought to finish editing those and post them soon! In the meantime, here are some takeaways—apologies in advance for listing these out without the supporting argumentation, but I felt it would probably be helpful on net to do so.
Astronomically many animals could experience tremendous suffering far into the future on farms, in the wild, and in simulations.
Achieving a near-universal and robust moral circle expansion to (nonhuman) animals seems important for protecting them in the long term. This is reminiscent of the abolitionist perspective held by many animal advocates; however, interventions that achieve welfare improvements in the near term could still have large long-term effects.
But moral circle expansion work can have direct and indirect negative consequences, which we should be careful to minimize.
It may become easier or harder to expand humanity’s moral circle to nonhuman animals in the future. Since this could have strategic implications for the animal advocacy movement, further research to clarify the likelihood of relevant scenarios could be very important.
I think it’s more likely to become harder, mostly because strong forms of value lock-in seem very plausible to me.
Small groups of speciesist holdouts could cause astronomical suffering over large enough time-scales, which suggests preserving our ability to spread concern for animals throughout all segments of humanity could be a priority for animal advocates.
Ending factory farming would also reduce biorisk, climate change, and food supply instability, all of which could contribute to an irrecoverable civilizational collapse.
Preventing the creation of artificial sentience until society is able to ensure it is free from significant suffering appears very beneficial, if attainable: developing artificial sentience at the present time seems likely to lead to substantial suffering in the near term, and could also bring about a condition of harmful exploitation of artificial sentience that persists into the far future.
Moral advocacy for artificial sentience can itself have harmful consequences (as can advocacy for nonhuman animals to a somewhat lesser extent). Nevertheless, there are some interventions to help artificial sentience which seem probably net-positive to me right now, though I’d be eager to see what others think about their pros/cons.
In terms of who is doing relevant work, I’d especially highlight Center for Reducing Suffering, Sentience Institute, Wild Animal Initiative, and Animal Ethics. But I do think most near-term-oriented effective animal advocacy organisations are doing work that is helpful in the long term as well—especially any that are positively influencing attitudes towards alternative foods or expanding the reach of the animal movement to neglected regions/groups without creating significant backlash. The same goes for meta-EAA organisations like ACE or Animal Advocacy Careers.
Tobias Baumann’s recent post How the animal movement can do even more good is quite relevant here, as is his earlier Longtermism and animal advocacy.
I also am very pleased that I’m the third James to respond so far out of four commenters :)
Pablo Stafforini has a great bibliography of articles on wild animal welfare that includes some earlier work coming from outside the EA space.
Thanks for checking—it’s not, as the CRS S-risk Introductory Fellowship doesn’t go into sufficient detail on some of the risks that CLR prioritises. I’ve added this to the seminar EOI form now.
I think the CRS S-risk Introductory Fellowship and CLR Foundations Course are pretty complementary. We’re taking a more targeted / object-level approach of mostly discussing a few specific risks CLR prioritises. We won’t spend significant time on the broader overview of s-risks and reasons for prioritising them that the CRS fellowship focuses on.