The non-humans and the long-term future tag is for posts relevant to questions such as:
To what extent (if at all) is longtermism focused only on humans?
To what extent should longtermists focus on improving wellbeing (or other outcomes) for humans? For other animals? For artificial sentiences? For something else?
Are existential risks just about humans?
Will most moral patients in the long-term future be humans? Other animals? Something else? By how large a margin?
Further reading
Anthis, Jacy Reese & Eze Paez (2021) Moral circle expansion: A promising strategy to impact the far future, Futures, vol. 130.
Baumann, Tobias (2020) Longtermism and animal advocacy, Center for Reducing Suffering, November 11.
Feinberg, Joel (1974) The rights of animals and unborn generations, in William T. Blackstone (ed.) Philosophy and Environmental Crisis, Athens, Georgia: University of Georgia Press, pp. 43–68.
Freitas-Groff, Zach (2021) Longtermism in animal advocacy, Animal Charity Evaluators, March 31.
Owe, Andrea & Seth D. Baum (2021) Moral consideration of nonhumans in the ethics of artificial intelligence, AI and Ethics, vol. 1, pp. 517–528.
Rowe, Abraham (2020) Should longtermists mostly think about animals?, Effective Altruism Forum, January 3.
Related entries
artificial sentience | farmed animal welfare | longtermism | long-term future | moral circle expansion | moral patienthood | universe’s resources | wild animal welfare | whole brain emulation
I’m less confident about the name, description, and scope of this tag than I am about the average tag I make. Feel free to make edits (or suggestions).
Here are two alternative tag name options (though I think they're worse than the current name):
Non-Humans and the Far Future
Longtermism and Non-Humans
I’ve just seen this tag. What’s the intended distinction between this and the moral circle expansion tag? Is it just that some actions that affect non-humans in the long-term future might not be via moral circle expansion? If that’s the case, then what’s the distinction from the s-risk tag? (As much as I welcome lots of discussion about these topics!)
FWIW, I’m actually somewhat surprised by those questions, and they make me more confident that this sort of entry is useful. One specific thing I find problematic is how often the relevance of non-humans to the long-term future is treated as entirely a matter of MCE and/or s-risks, with a totally separate discussion about other interventions, other risks (e.g., extinction risks), how good the future could be if things go well, etc.
Here are some quick notes on how the following points from the entry are not just about MCE or s-risks:
To what extent (if at all) is longtermism focused only on humans?
This could also be about things like whether current longtermists are motivated by consideration of non-humans, which is more a matter of current moral circles than of moral circle expansion.
And moral circles are largely a matter of moral views; this question is also substantially about empirical views, such as which beings will exist, in what numbers, and with what experiences.
To what extent should longtermists focus on improving wellbeing (or other outcomes) for humans? For other animals? For artificial sentiences? For something else?
Moral circle expansion may or may not help achieve these goals, and other things may achieve them too.
Other potentially relevant variables include the long reflection, epistemics, space governance, and authoritarianism.
And which beings we should focus on is partly a question of which beings will exist, in what numbers, and with what experiences, and of how tractable and neglected improving their wellbeing will be.
Are existential risks just about humans?
I’d actually guess that most of the badness of existential risks, even those other than s-risks, comes from effects on beings other than humans.
Perhaps especially human-like digital minds, but also potentially “weirder” digital minds and/or wild animals on terraformed planets.
E.g., if we could create vast good experiences for these beings but instead go extinct or face unrecoverable collapse or dystopia, a big part of the badness could come from the loss of what these beings (not humans) could have experienced.
S-risks are of course also relevant here, but they aren’t the only issue.
Will most moral patients in the long-term future be humans? Other animals? Something else? By how large a margin?
(As noted above, this is to a substantial extent an empirical question.)
It seems to me that it should be fairly clear how an entry with this title is distinct from an entry on “the attempt to expand the perceived boundaries of the category of moral patients” and from an entry on “a risk involving the creation of suffering on an astronomical scale”. (I’m copying those descriptions from the MCE and s-risk tags.)
But I’m of course also open to suggestions on how to make the distinctions clearer.
Ah, it sounds like most of those things relate to questions around maximising good experiences for future non-humans rather than minimising bad experiences. That makes sense; I’m not sure why I didn’t think of that (I might have been having a mind blank). So thanks for explaining.
Fwiw, it seemed obvious that this tag was in principle broader than the MCE tag; I just couldn’t think of instances where this tag would apply but neither the MCE nor the s-risk tag would. (And I already feel there’s a lot of overlap between tags, e.g. should I tag something I write about farmed animals as being about MCE and s-risks?)
On overlap between tags and when to apply tags, the tagging guidelines say:
So I think if a post is primarily about farmed animals but does have e.g. a paragraph explicitly about MCE or s-risks, then it should get those tags. If you merely believe the post is relevant to MCE or s-risks, but the post doesn’t really make the reasoning for that clear to the reader, then I think it shouldn’t get the tag. (There would probably be trickier cases in between, and if you notice some, it might be helpful to comment on the tagging guidelines page so the guidelines can be clarified in light of them.)