Philosophy, global priorities and animal welfare research. My current specific interests include: philosophy of mind, moral weights, person-affecting views, preference-based views and subjectivism, moral uncertainty, decision theory, deep uncertainty/cluelessness and backfire risks, s-risks, and indirect effects on wild animals.
I’ve also done economic modelling for some animal welfare issues.
Starting my own discussion thread.
My biggest doubt about the value of extinction risk reduction stems from my (asymmetric) person-affecting intuitions: I don’t think it makes things better to ensure future people (or other moral patients) come to exist, whether for their own sake or for the sake of the value within their own lives. But if future people will exist, I want to make sure things go well for them. This is summarized by the slogan “Make people happy, not make happy people”.
If this holds, then extinction risk reduction saves the lives of people who would otherwise die in an extinction event, which is presumably good for them, but that benefit accrues to only billions of humans.[1] If we don’t go extinct, then the number of our descendant moral patients could be astronomical. It therefore seems better to prioritize our descendant moral patients, conditional on our survival, because there are far, far more of them.
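To make the scale comparison explicit, here’s a rough back-of-the-envelope sketch in my own notation (the symbols are placeholders for illustration, not estimates from any particular model). Under the person-affecting framing, the value of extinction risk reduction is roughly the benefit to the people alive at the time:

$$V_{\text{extinction reduction}} \approx \Delta p_{\text{survival}} \cdot N_{\text{present}} \cdot \bar{v}_{\text{life saved}}$$

while the value of improving how things go for our descendant moral patients, conditional on survival, is roughly

$$V_{\text{quality}} \approx p_{\text{survival}} \cdot N_{\text{future}} \cdot \Delta\bar{v}_{\text{per patient}}$$

where $N_{\text{present}}$ is on the order of billions and $N_{\text{future}}$ could be astronomically larger, so the second term can dominate even for modest per-patient improvements $\Delta\bar{v}_{\text{per patient}}$, unless $p_{\text{survival}}$ is very low.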
Aliens (including alien artificial intelligence) complicate the picture. We (our descendants, whether human, AI or otherwise) could:
1. use the resources aliens would otherwise have for our purposes instead of theirs, i.e. replace them,
2. help them, or
3. harm them or be harmed by them, e.g. through conflict.
I’m interested in others’ takes on this.
And it’s not clear we want to save other animals, since their lives may be bad on average. It can also make a difference whether we’re talking about the extinction of humans only or of all animals.