AGI x Animal Welfare: A High-EV Outreach Opportunity?
Epistemic status: Written very quickly, about a thought I’ve been holding for a year and haven’t read elsewhere.
I believe that within this decade, there could be AGIs (Artificial General Intelligences) powerful enough that the values they pursue have an at least partial lock-in effect. This means they could have a long-lasting impact on the future values and trajectory of our civilization (assuming we survive).
This brief post aims to share the idea that if your primary focus and concern is animal welfare (or digital sentience), you may want to consider engaging in targeted outreach on those topics towards those who will most likely shape the values of the first AGIs. This group likely includes executives and employees in top AGI labs (e.g. OpenAI, DeepMind, Anthropic), the broader US tech community, as well as policymakers from major countries.
Due to the risk of lock-in effects, I believe that the values of relatively small groups of individuals like the ones I mentioned (fewer than 3,000 people in top AGI labs) might have a disproportionately large impact on AGI, and consequently on the future values and trajectory of our civilization. My impression is that, generally speaking, these people currently
a) don’t prioritize animal welfare significantly
b) don’t show substantial concern for the sentience of digital minds.
Hence, if you believe these things are very important (which I do), and you think that AGI might come in the next few decades[1] (as a majority of people in the field believe), you might want to consider this intervention.
Feel free to reach out if you want to chat more about this, either here or via the contact details you can find here.
[1] Even more so if you believe, as I do along with many software engineers in top AGI labs, that it could happen this decade.
Strongly agree that if lock-in happens, it will be very important for those controlling the AIs to care about all sentient beings. My impression of top AGI researchers is that most take AI sentience pretty seriously as a possibility, and it seems hard for someone to think this without also believing animals can be sentient.
Obviously this is less true the further you get from AI safety/OpenAI/DeepMind/Anthropic. An important question is, if AGI happens and the control problem is solved, who ends up deciding what the AGI values?
I’m pretty uncomfortable with the idea of random computer scientists, tech moguls, or politicians having all the power. Seems like the ideal to aim for is a democratic process structured to represent the reflective interests of all sentient beings. But this would be extremely difficult to do in practice. Realistically I expect a messy power struggle between various interest groups. In that case, outreach to leaders of all the interest groups to protect nonhuman minds is crucial, as you suggest.
I wrote some related thoughts here, curious what you think.
I am not saying this is common, but it is alarming that Eliezer Yudkowsky, a pretty prominent figure in the space, thinks that AI sentience is possible but nonhuman animals are not sentient.
Agreed, it’s a pretty bizarre take. I’d be curious whether his views have changed since he wrote that FB post.
Also, Holden Karnofsky thinks (not so confidently) that humans matter astronomically more than nonhuman animals, while at the same time thinking that digital people are possible.
This is indeed a good idea (although it isn’t that clear to me how targeted outreach to the people there would work, but I haven’t done targeted outreach before).
A future in which the current situation continues, but with AI making us more powerful, would in all likelihood be a very bad one if we include farmed animals (it gets more complicated if you include wild animals).
See the following relevant articles:
- Optimistic longtermism would be terrible for animals
- If we don’t end factory farming soon, it might be there forever
- Why the expected numbers of farmed animals in the far future might be huge

To me, it seems likely that the “expected value” of the future depends mostly on what happens to farmed and wild animals. See the Moral Weight Project: “Given hedonism and conditional on sentience, we think (credence: 0.65) that the welfare ranges of humans and the vertebrate animals of interest are within an order of magnitude of one another”.
Thanks for writing this! I have been meaning to write something about why I think digital sentience should potentially be prioritized more highly in EA; in lieu of that post, here’s a quick pitch:
One of EA’s comparative advantages seems to have been “taking ideas seriously.” Many of EA’s core ideas came from other fields (economics, philosophy, etc.); the unusual aspect of EA is that we didn’t treat invertebrate welfare or Famine, Affluence, and Morality as intellectual thought experiments, but as serious issues.
It seems possible to me that digital welfare work will, by default, exist as an intellectual curiosity. My sample of AI engineers is skewed, but my sense is that most of them would be happy to discuss digital sentience for a couple of hours, yet are unlikely to focus on it heavily.
Going from “that does seem like a potentially big problem, someone should look into that” to “I’m going to look into that” is a thing that EAs are sometimes good at doing.
On (2): I agree most are unlikely to focus on it heavily, but convincing some people at top labs to care at least slightly seems like it could have a big effect in making sure at least a little animal welfare and digital minds content is included in whatever they train AIs to aim towards. Even a small amount of empathy and open-mindedness about what could be capable of suffering should do a lot to reduce the risk of astronomical suffering.
I’m not too confident that AGIs would be prone to value lock-in. Possibly I am optimistic about AI, but AI already seems quite good at working through ethical dilemmas and acknowledging that there is nuance and that views on morals conflict. It would seem like quite the blunder to simply regard the morals of those closest to them as the ones of most importance.
But AIs could value anything. They don’t have to value some metric of importance that lines up with what we care about on reflection. That is, it wouldn’t be a blunder in an epistemic sense. AIs could know their values lack nuance and go against human values, and just not care.
Or maybe you’re just saying that, with the path we’re currently on, it looks like powerful AIs will in fact end up with nuanced values in line with humanity’s. I think this could still constitute a value lock-in, though, just not one that you consider bad. And I expect there would still be value disagreements between humans even if we had perfect information, so I’m skeptical we could ever instill values into AIs that everyone is happy with.
I’m also not sure AI would cause a value lock-in, but more because powerful AIs may be widely distributed such that no single AGI takes over everything.
Interesting, I wonder if an AGI will have a process for deciding its values (like a constitution). But then the question is how it decides what that process is (if there is one).
I thought there might be a connection between having a nuanced process for an AGI to pick its values and its problem-solving ability (e.g. how to end the world), such that having the ability to end the world must mean it also has a good ability to work through nuance in its values and to conclude that ending the world may not be valuable. Possibly this connection might not always exist, in which case epic sussyness may occur.
Yeah, there might be a correlation in practice, but I think intelligent agents could have basically any random values. There are no fundamentally incorrect values, just some values that we don’t like or that you’d say lack nuance about importance. Even under moral realism, intelligent systems don’t necessarily have to care about the moral truth (even if they’re smart enough to figure out what the moral truth is). Cf. the orthogonality thesis.
I mean, I agree that it has nuance, but it’s still trained on a set of values that are pretty much those of current Western people, so it will probably put more or less emphasis on various values according to the weight Western people give to each of them.
I’m not too sure how important values in datasets would be. Possibly AGIs may be created differently from current LLMs, simply not needing a dataset to be trained on.
This idea sounds good, and your website looks great (best of luck with your projects! :)
Thanks for sharing, Simeon!
I guess part of the lack of concern for artificial sentience is explained by people at top labs focusing on aligning AGI with human values, rather than impartial value (relatedly). Ensuring that AI systems are happy seems like a good strategy to increase impartial value. It would lead to good outcomes even in scenarios where humans become disempowered. Actually, the higher the chance of humans becoming disempowered, the more pressing artificial sentience becomes? I suppose it would make sense for death with dignity strategies to address this (some discussion).
Peter Singer and Tse Yip Fai were doing some work on animal welfare relating to AI last year: https://link.springer.com/article/10.1007/s43681-022-00187-z It looks like Fai, at least, is still working in this area. But I’m not sure whether they have considered or initiated outreach to AGI labs; that seems like a great idea.