I recently completed a PhD exploring the implications of wild animal suffering for environmental management. You can read my research here: https://scholar.google.ch/citations?user=9gSjtY4AAAAJ&hl=en&oi=ao
I am now considering options in AI ethics, governance, or the intersection of AI and animal welfare.
Tristan Katz
So does that mean you think it’s likely that we will spread to other planets without spreading ecosystems? If we spread ecosystems it seems likely that we would also spread at least some wild animals. And I think we have good reasons to do so—to promote good atmospheres and other ecosystem services.
I feel pretty skeptical that humans capable of going to other galaxies would not have realized the inefficiencies of meat and would still not have made competitive substitutes.
Most animals are wild animals, so the answer to this question should focus on them. It seems to me that the answer largely depends on how we understand “goes well for humans”, and what we expect the counterfactual to be.
So what are the possible scenarios?
AGI empowers humans to make their own decisions, and to make better decisions. I expect this would greatly accelerate progress toward helping wild animals. This would be great.
AGI replaces human decision-making. It then either:
Reasons further from a starting point of human values, removing biases and inconsistencies—which I think would lead it to care more about animals.
Or it simply locks in current human values.
And what’s the counterfactual?
A continuation of the world as it is today: one where humanity gradually cares more and more about animal welfare, and in which there is at least a potential for caring about wild animals to be normalized. In this case, scenarios 1 and 2(a) seem good, but 2(b) seems very bad.
A world in which the WAW movement fails. In this case even 2(b) doesn’t look that bad, but 1 and 2(a) seem very good.
I’m not sure if this is complete. I’m also not sure how to assign probabilities—I don’t think I know enough about AGI. But tentatively, I expect scenario 2 to be most likely, with (a) and (b) roughly equal, and counterfactual 1 to be most likely. For that reason I’m going with 20% likely to be good.
But I want to say that I would not take a bet with a 20% chance of winning everything and an 80% chance of losing everything, and this feels very close to that. I think this is a terrible gamble and we shouldn't do it. I hope that the debate results won't be understood as EAs saying that this is a bet worth taking.
[I realise I misremembered Horta & Teran’s argument, so I edited that comment now]
I agree that people at WAI might have opinions about how one should do ecosystem restoration, but I doubt they would express them publicly, because such opinions are highly speculative at this stage. Maybe @mal_graham🔸 can correct me if I'm mistaken!

I think present and future WAW advocates would fiercely disagree about which ecosystems might be net good/bad, and any intervention aimed at making greening more likely would be highly controversial.
I suppose this is true, given different intuitions about population ethics. But 1) at some point these disagreements need to be overcome—so maybe we just need to take some moral uncertainty approach—and 2) I'm perhaps optimistic that progress will even reduce the disagreements on these matters. I also think that a decision will be made on these matters one way or the other, so WAW really ought to make a call about the population ethics questions and then try to influence the decision in the way that seems best.
But I can also imagine that in other cases the decision might be simpler, e.g. promoting indigenous trees in a given area might not radically increase or decrease the number of sentient beings, but might greatly change the welfare profile of the ecosystem.
Whatever the incentive for restoration is, it seems far stronger than the incentive to please the few detractors who do not want the landscape restored.
Incentives will vary depending on the context! For example, the regeneration of forest is actively opposed in much of Central Europe, because people have cultural ideas about what the landscape should look like. So there’s a tension there between environmentalists and traditionalists, and I wouldn’t say that the environmentalists are winning.
The situation I'm thinking of is not necessarily ecosystem restoration. It's changing one ecosystem to another (although admittedly, most ecological restoration is exactly that). So the relevant question is whether one ecosystem type has a higher level of welfare than another.
But yes, some such activities are happening anyway, such as desert greening—and we might be able to promote or oppose them, depending on whether they seem welfare-promoting or not. Since these activities are happening anyway, and usually aren’t heavily politicised, I see no reason why some activism couldn’t influence things one way or the other (e.g. by providing environmental reasons to encourage changes like desert greening, or leveraging conservative valuing of traditional landscapes to oppose it). Are there particular reasons why you’re skeptical?
WAI to my knowledge doesn't discuss many interventions—they are positioning themselves as a science-promotion organization, not as an advocacy organization. My understanding is that they want this to be taken seriously as a field of scientific study, and so they are avoiding promoting interventions for which there isn't solid data. And this is definitely something for which we don't yet have good data.
Hi Jim, thanks for pushing back on this! To be honest, this was the intervention I'm least confident in. I got the idea from this article by Horta & Teran, where they argue that ecosystems involving large herbivores such as elephants are likely to have higher average welfare than ecosystems without them, since large herbivores break down a lot of biomass, leaving less for smaller, faster-reproducing animals. I think that they are overconfident in their claim—as I point out in the full paper, it's not clear that elephants always have this effect. But still, I'm optimistic that within the next 50-100 years we might have enough info to make these kinds of calls. Admittedly, not as soon as for some of the other interventions.
But is your point more about the social/political challenge? I’m not aware of collaborations between restoration scientists and WAW scientists, so I can’t give you reasons for optimism, but I also don’t have reasons for pessimism! Do you? An intervention doesn’t even need to be framed around WAW either—you could just fund an organization to lobby for desert greening (for example) in a particular area, and they could leverage whatever arguments they’ve got.
Not sure how satisfying that is; I'm interested to hear your thoughts.
*I realize the elephant example is actually from a different paper. In the referenced paper, they give a more general argument:

"We may be able to make some rough predictions about the different ecosystems that different decisions would produce in the targeted area. Accordingly, we may be able to guess what kind of animals will be there in each case. Such animals might be among those who have higher survival rates and longer average lifespans, and who reproduce by having small numbers of offspring. Or they may, instead, be among those who reproduce in very large numbers and tend to die in their very early youth. The latter, who unfortunately are the majority in nature, typically have much harder lives. Their lives may be so hard that they often contain more suffering than pleasure."
Dang, this sounds really cool. So do I understand correctly that you’re all disbanding after July? I would be very interested in this kind of thing from August or September… 👀
Thanks, I’m looking forward to this! Some questions that seem worth considering to me are:
1. Is AGI likely to lock in values? (if so it’s probably bad for animals)
2. Is the answer to this question even knowable? (a lot of what I’ve heard on the topic has been like “AI could mean X but also not X”)
3. If AGI is good/bad, how steerable is it? (e.g. maybe making sure that AGI goes well for humans is actually much easier)
I think these are fair points, but the tone seems unconstructive and a bit condescending. I think it's possible to disagree and to caution loudly while still respecting that the post was made in good faith.
For what it’s worth I’m also surprised by the reaction. Within government departments in NZ (where I worked before) this is not allowed. Of course it still happens but it seems good to me for the organization to discourage it.
*Edit for spelling
Want to add this here: https://www.reddit.com/r/dataisbeautiful/comments/1rhv521/oc_dietary_v_nondietary_veganism_interest_over_16/
Reddit might not be the best source of data. But it confirms what I've heard elsewhere that 2018-2020 was the height of veganism as a health craze, and it at least indicates that ethical vegans (if they are reflected by those who buy vegan clothes) are still growing in number.
I think this is a super important question and want way more conversation about it—but could we re-frame your conclusion as being not that we shouldn't use AI, but that we should be mindful about how we're using it?
The scenario you described appears to be a pretty bad use. But I think much of the harm you're seeing could be mitigated. Here are some ideas, just off the top of my head, addressing the issues you listed (in order). Use of AI in research should:
1. Consider the appropriateness of AI in that context (e.g. is this an area where we need the most up-to-date answers? Is this an area where we want to consider non-western perspectives?)
2. Approach AI-generated answers critically, treating them as vibes-based answers rather than having any authority (and in group-work contexts, leaders should encourage this)
3. Have AI write up its answers in bullet points rather than full text, so that a human is always contributing to the style
4. Be a second or later-resort option (try to think creatively/critically first, rather than relying on AI—again, leaders can encourage this)
5. In group settings: encourage new or unusual ideas (addresses the last two points).
I know these are far from perfect solutions. Point 4 is admittedly quite hard to keep up with (I feel myself struggling with this). But to me it feels similar to how a calculator makes people lazy (I'm sure I can't do mental arithmetic now as well as I could when I was 12), but is still a net win. It seems likely that if we create good habits/culture around using AI, its benefits can significantly outweigh the downsides, even in research.[1] But I do think that requires a lot of conversations, and maybe some research, into how to use it well and avoid those pitfalls. So I would love to see more posts discussing this.
[1] I think these benefits are pretty significant. For instance (and as a counterpoint to point 5), I find AI can actually help to rein in crazy ideas by acting as a sanity-check tool; I also find it helpful for quickly spotting holes in an argument when otherwise I would only have gotten feedback from a colleague some days later; and it can quickly structure disorganized ideas. But surely there are many more.
By power I mean: ability to change the world, according to one’s preferences. Humans clearly dominate today in terms of this kind of power. Our power is limited, but it is not the case that other organisms have power over us, because while we might rely on them, they are not able to leverage that dependency. Rather, we use them as much as we can.
No human is currently so powerful as to have power over all other humans, and I think that's definitely a good thing. But it doesn't seem like it would take much more of an advantage for one intelligent being to dominate all others.
The argument I’m referring to is the AI doom argument. Y&S are its most prominent proponents, but are widely known to be eccentric and not everyone agrees with their presentation of it. I’m not that deep in the AI safety space myself, but I think that’s pretty clear.
The authors of this post seemed to respond to the AI doom argument more generally, and took the book to be the best representative of the argument. So that already seems like a questionable move, and I wish they’d gone further.
I don't think the point about alien preferences is a crux of the AI doom argument generally. I think it's presented in Bostrom's Superintelligence and Rob Miles's videos (and surely countless other places) as: "an ASI optimising for anything that doesn't fully capture collective human preferences would be disastrous. Since we can't define collective human preferences, this spells disaster." In that sense it doesn't have to be 'alien', just different from the collective sum of human preferences. I guess Y&S took the opportunity to say "LLMs seem MUCH more different" in an attempt to strengthen their argument, but they didn't have to.
As I said, I'm not really that deep into AI safety, so I'm not the person to go to for the best version of these arguments. But I read the book, sat down with some friends to discuss it… and we each identified flaws, as the authors of this post did, and then found ways to make the argument better, using other ideas we'd been exposed to and some critical reflection. It would have been really nice if the authors of the post had taken that second step and steelmanned it a bit.
Thanks Yarrow, I can see that that was confusing.
I don’t think that Yudkowsky & Soares’s argument as a whole is obviously wrong and uninteresting. On the contrary, I’m rather convinced by it, and I also want more critics to engage with it.
But I think the argument presented in the book was not particularly strong, and others seem to agree: the reviews on this forum are pretty mixed (e.g.). So I’d prefer critics to argue against the best version of this argument, not just the one presented in the book. If these critics had only set out to write a book review, then I’d say fine. But that’s not what they were doing here. They write “there is no standard argument to respond to, no single text that unifies the AI safety community”—true, but you can engage with multiple texts in order to respond to the best form of the argument. In fact that’s pretty standard, in academia and outside of it.
It doesn’t seem like a straw man to me when 1) the effectiveness of these interventions is evaluated against their short term impact (as far as I’m aware ACE doesn’t consider this kind of long term impact much at all), and 2) the orgs don’t publish any long term theory of change to help donors or critics decide if they agree with it. This strongly implies that their long term theory of change is far less important than the short term wins, at least at the organization level.
Just out of interest, do you believe that animal welfare wins are moving us AWAY from abolition? I agree with you that it’s possible but I haven’t ever seen any evidence that there is this effect. It also seems very possible to have incremental improvements and then eventually abolition, as people become more empathetic and aware.
Sorry I’m late to comment here, and I’m aware you’ve written a lot on this topic. But I think this post would benefit from an explanation as to why you’re using neuron counts as a proxy for the importance of the animal’s welfare.
As far as I’m aware, neuron count is not considered to be a good proxy or indicator of sentience, nor does it seem to be a good proxy for the intensity of experience (I’m not even aware of any good reason for assuming a difference in welfare range between sentient species, although I’m aware that this position is commonly held). Regarding the simple question of whether they’re sentient, wouldn’t it make more sense to base this on current evidence for sentience, or reasonable assumptions about what evidence future sentience research might produce, given the characteristics of these species?
I think the evolution analogy becomes relevant again here: consider that the genus Homo was at first more intelligent than other species but not more powerful than their numbers combined… until suddenly one jump in intelligence let Homo sapiens wreak havoc across the globe. Similarly, there might be a tipping point in AI intelligence where fighting back becomes very suddenly infeasible. I think this is a much better analogy than Elon Musk, because like an evolving species a superintelligent AI can multiply and self-improve.
I think a good point that Y&S make is that we shouldn't expect to know where the point of no return is, and should be prudent enough to stop well before it. I suppose you must have some source/reason for the 0.001% confidence claim, but it seems pretty wild to me to be so confident in a field that is evolving and—at least from my perspective—pretty hard to understand.
It seems to me that the ‘alien preferences’ argument is a red herring. Humans have all kinds of different preferences—only some of ours overlap, and I have no doubt that if one human became superintelligent that would also have a high risk of disaster, precisely because they would have preferences that I don’t share (probably selfish ones). So they don’t need to be alien in any strong sense to be dangerous.
I know it’s Y&S’s argument. But it would have been nice if the authors of this article had also tried to make it stronger before refuting it.
Copying my response from your other comment:
Does that mean you think it’s likely that we will spread to other planets without spreading ecosystems? If we spread ecosystems it seems likely that we would also spread at least some wild animals. And I think we have good reasons to do so—to promote good atmospheres and other ecosystem services.
I feel pretty skeptical that humans capable of going to other galaxies would not have realized the inefficiencies of meat and would still not have made competitive substitutes.