This is really awesome and helpful! Thanks Saulius!
One group that is probably pretty small but isn’t listed here: animals in wildlife rehabilitation clinics. This page says 8k to 9k animals (I’m guessing mostly vertebrates?) enter clinics in Minnesota every year. If that scales by land area to the contiguous United States, that would be roughly 270k to 305k animals per year in the US, so maybe a few million globally? But that’s just a guess from the first good source I saw.
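For what it’s worth, here’s a minimal sketch of that scaling. The area figures are approximate numbers I’m assuming, not values from the linked page, and using land area rather than total area shifts the result a bit:

```python
# Rough replication of the rehab-clinic scaling above.
# Both area figures are approximate assumptions, not from the linked page.
MN_AREA_KM2 = 225_000        # Minnesota, total area (approx.)
CONUS_AREA_KM2 = 8_080_000   # contiguous United States, total area (approx.)

scale = CONUS_AREA_KM2 / MN_AREA_KM2  # ~35.9x

low, high = 8_000 * scale, 9_000 * scale
print(f"~{low:,.0f} to {high:,.0f} animals/year")
# ~287,000 to 323,000 with these figures: the same ballpark as the
# 270k to 305k range above; the exact numbers depend on which area
# measure (total vs. land) you use.
```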
On pet shelters: I used to work at one, and every month we reported our current animal population (along with a lot of other stats) to this organization: https://shelteranimalscount.org/. I think their data could probably be used to get a very accurate estimate of animals currently in shelters in the US.
Yeah, I think that’s right that it is a conservative scenario. My point was more that the proposed future scenarios don’t come close to imagining as much welfare / mind-stuff as might exist right now.
Hmm, I think my point might be something slightly different: more to pose a challenge to explore how taking animal welfare seriously might change conclusions about the long-term future. Right now, there seems to be almost no consideration of it. I guess I think it is likely that many longtermists already think animals matter morally (given the popularity of such a view in EA). But I take your point that for general longtermist outreach, this might be a less appealing discussion topic.
Thanks for the thoughts Brian!
Yeah, the idea of looking into longtermism for nonutilitarians is interesting to me. Thanks for the suggestion!
I think regardless, this helped clarify a lot of things for me about particular beliefs longtermists might hold (to various degrees). Thanks!
That makes sense!
Is the deadline at a specific time on February 6th, or before the 6th (i.e. EOD the 5th)? The wording is just slightly vague.
Thanks for all you do!
Thanks for the feedback—that’s a good rule of thumb!
Thanks for laying out this response! It was really interesting, and, if you hold these beliefs, probably a good reason not to take animals as seriously as I suggest you ought to.
I think something interesting that this and the other objections presented to my piece have brought out is that, to avoid focusing exclusively on animals in longtermist projects, you have to have some level of faith in these science-fiction scenarios happening. I don’t necessarily think that is a bad thing, but it isn’t something that’s been made explicit in past discussions of longtermism (at least in the academic literature), and perhaps it ought to be?
A few comments on your two arguments:
Claim: Our descendants may wish to optimize for positive moral goods.
I think this is a precondition for EAs and do-gooders in general “winning”, so I almost treat the possibility of this as a tautology.
This isn’t usually assumed in the longtermist literature. It seems more like the argument is made on the basis of future human lives being net-positive, and therefore it being good that there will be many of them. I think the expected value of your argument (A) hinges on this claim, so accepting it as a tautology, or something similar, is actually really risky. If you think it is basically 100% likely to be true, of course your conclusion might follow. But if you don’t, it seems plausible that, like you mention, priority ought to be on s-risks.
In general, a way to summarize this argument, and others given here, could be something like: “there is a non-zero chance that we can make loads and loads of digital welfare in the future (more than exists now), so we should focus on reducing existential risk in order to ensure that future can happen”. This raises a question: when would that claim stop being true, or the argument you’re making stop being relevant? It seems plausible that this kind of argument is a justification to work on existential risk reduction until basically the end of the universe (unless we somehow solve it with 100% certainty, etc.), because we might always assume future people will be better at producing welfare than us.
I assume people have discussed the above, and I’m not well read in the area, but it strikes me as odd that the primary justification in these sci-fi scenarios for working on the future, rather than working directly on making lives with good welfare, is just a claim that can always be made (though maybe this is a consideration with longtermism in general, and not just this argument).
I guess part of the issue here is that you could have an incredibly tiny credence in a number of very specific things being true (the present being at the hinge of history, various claims about future sci-fi scenarios), and having those credences would always justify deferring action.
I’m not totally sure what to make of this, but I do think it gives me pause. But, I admit I haven’t really thought about any of the above much, and don’t read in this area at all.
Thanks again for the response!
Yeah, I think it probably depends on your specific credence that artificial minds will dominate in the future. I assume that most people don’t place a value of 100% on that (especially if they think x-risks are possible prior to the invention of self-replicating digital minds, because that necessarily decreases your credence that artificial minds will dominate). If your credence in this claim is relatively low, which seems reasonable, it is really unclear to me that the expected value of working on human-focused x-risks is higher than that of working on animal-focused ones. There hasn’t been any attempt that I know of to compare the two, though, so I can’t say this with confidence. But it is clear that saying “there might be tons of digital minds” isn’t a strong enough claim on its own, without specific credences in specific numbers of digital minds.
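To make the structure of that comparison explicit, here’s a toy sketch; every number in it is a hypothetical placeholder, not an estimate from the post, the comments, or the literature:

```python
# A toy sketch of the expected value comparison above. All inputs are
# hypothetical placeholders, chosen only to illustrate the structure.

def ev_comparison(p_digital, v_digital, v_animal):
    """Crude model: EV of human-focused x-risk work vs. animal-focused work,
    as (credence that artificial minds dominate) x (welfare at stake)."""
    ev_human_focused = p_digital * v_digital
    ev_animal_focused = (1 - p_digital) * v_animal
    return ev_human_focused, ev_animal_focused

print(ev_comparison(0.9, 1e12, 1e10))   # high credence: human-focused work dominates
print(ev_comparison(0.01, 1e12, 1e10))  # low credence: the gap nearly closes
# The ranking depends entirely on the specific credences and magnitudes,
# which is the point: "there might be tons of digital minds" doesn't
# settle it on its own.
```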
That’s a good point!
I think something to note is that while I think animal welfare over the long term is important, I didn’t really spend much time thinking about possible implications of this conclusion in this piece, as I was mostly focused on the justification. I think that a lot of value could be added if some research went into these kinds of considerations, or alternative implications of a longtermist view of animal welfare.
Yes, this was noted in the sentence following your quote and in the paragraphs after this one. Note that if humans implemented extremely resilient interventions, working on human-focused x-risks might be less valuable, but I broadly agree that humanity’s moral personhood is a good reason to think x-risks impacting humans are valuable to work on. Reading through my conclusions again, I could have been clearer on this.
Ah, I meant human, whether emulated or organic, since Rob referred to emulated humans in his comment. For less morally weighty digital minds, the same questions re: emulating animal minds apply, though the terms ought to be changed.
Also, it seems worth noting that much of the literature on longtermism, outside the Foundational Research Institute, isn’t making claims explicitly about digital minds as the primary holders of future welfare, but just focuses on future organic human populations (Greaves and MacAskill’s paper, for example), and on populations of similar size to the present-day human population at that.
Admittedly, I haven’t thought about this extensively. I think there are a variety of x-risks that might cause humans to go extinct but not animals, such as specific bio-risks. And there are x-risks that might threaten both humans and animals (a big enough asteroid?), which would fall into the group I describe. One animal-focused risk might just be continued human development massively decreasing animal populations (assuming animals have net-positive lives), though I think those scenarios might be unlikely.
I haven’t given enough thought to the second question, but I’d guess that if you thought most of the value of the future was in animal lives, and not human lives, it should change something, especially given how focused the longtermist community has been on preserving only human welfare.
I’m not sure that even under the scenario you describe animal welfare doesn’t end up dominating human welfare, except under a very specific set of assumptions. In particular, you describe ways for human-esque minds to explode in number (propagating through space as machines or as emulations). Without appropriate efforts to change the way humans perceive animal welfare (wild animal welfare in particular), it seems very possible that:

1. Humans and our machine descendants might manufacture/emulate animal minds (and, since wild animal welfare hasn’t been addressed, emulate their suffering);
2. Animals will continue to exist and suffer on our own planet for millennia; or
3. Taking an idea from Luke Hecht, there may be vastly more wild “animals” suffering already off-Earth: if we think there are human-esque alien minds, then there are probably vastly more alien wild animals. The emulated minds that descend from humans may have to address cosmic wild animal suffering.
All three of these situations mean that even when the total expected welfare of the human population is incredibly large, the total expected welfare (or potential welfare) of animals may also be incredibly large, and it isn’t easy to see in advance that one would clearly outweigh the other (unless animal life (biological and synthetic) is eradicated relatively early in the timeline compared to the propagation of human life, which is an additional assumption).
Regardless, if animal welfare dominates in all situations where humans are bound to the solar system and in many where they leave, then your credence that animal welfare will continue to dominate should be at least as high as your credence that humans stay bound to the solar system, plus some share of your credence that they leave. So neglecting animal welfare on the grounds that humans will dominate via space exploration seems to require further information about the relative probabilities of the various situations, weighted by the relative populations in those situations.
I haven’t attempted any particular expected value calculation, but it doesn’t seem to me like you can conclude, simply because human welfare has the potential to be infinite or extravagantly large, that the value of working on human welfare is definitely higher. That claim requires the additional assertion that animal welfare will not also be incredibly or infinitely large, which, as I describe above, requires further evidence. And you would also have to account in that expected value calculation for the fact that wild animal welfare seems vastly more important currently and will be for the near future (which, given that your objection is focused on the future, I take it you might already believe?).
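As a minimal sketch of that decomposition, under the stated assumption that animal welfare dominates in every bound-to-the-solar-system scenario (both probabilities below are hypothetical placeholders):

```python
# A minimal sketch of the decomposition above. Both probabilities are
# hypothetical placeholders, used only to show the structure.
p_bound = 0.5              # credence that humans stay bound to the solar system
p_dom_given_leave = 0.6    # share of "leave" scenarios, like (1)-(3) above,
                           # where animal welfare still dominates

p_animal_dominates = p_bound + (1 - p_bound) * p_dom_given_leave
print(p_animal_dominates)  # 0.8 with these placeholders
# Note that p_animal_dominates >= p_bound whatever values you plug in, so
# the objection can't be settled just by pointing at the possibility of
# space settlement; you need the relative probabilities and populations.
```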
If this is your primary objection, at best it seems like it ought to marginally lower your credence that animal welfare will continue to dominate. It strikes me as an extremely narrow possibility among many, many possible worlds where animals continue to dominate welfare considerations, so in expectation we should still think animal welfare will dominate into the future. I’d be interested in your specific credence that the situation you outlined will happen.
This is really amazing, and it’ll be interesting to see it applied to wild animal welfare work in the future. I also imagine that there are a lot of applications for farmed animal welfare improvements, etc. Thanks for sharing!
Thanks for the response! I guess I personally am interested in it, because I think it would lend credibility to WAW outreach projects to be able to cite it.
That’s great to hear! I guess I think it would be great for norms of caring about invertebrates to be spread in the animal advocacy space, so that seems good.
I don’t actually know if engagement is important (maybe it’s an indicator of either your thoroughness, as there are few follow-ups, or just that you all are the experts, so most people on the forum aren’t going to weigh in). Sharing with funders makes a lot of sense. Thanks!