I’d say that we’re interested in all three: preventing outright extinction, preventing some other kind of existential catastrophe, and trajectory changes such as moving probability mass from “okay” worlds to “very good” worlds; I would expect some non-trivial fraction of our impact to come from each of those channels. However, I’m unsure how much weight each of these scenarios should get; that depends on various complicated empirical and philosophical questions we haven’t fully investigated (e.g. “What is the probability that civilization would recover from collapses of various types?” and “How morally valuable should we think it is if the culture which arises after a recovery from collapse is very different from our current culture, and that culture is the one which gets to determine the long-term future?”). In practice, our grantmaking doesn’t make fine-grained distinctions among these channels and isn’t premised on any one of them: biosecurity and pandemic preparedness grantmaking may help prevent both outright extinction and civilizational collapse; AI alignment grantmaking may help prevent outright extinction or help turn an “okay” future into a “great” one; etc.
I’d say that long-termism as a view is inherently animal-inclusive (just as the animal-inclusive view inherently also cares about humans); the view places weight on humans and animals today, and on humans, animals, and other types of moral patients in the distant future. The fact that it’s animal-inclusive is often less salient, though, because the view is concerned with the potential for creating large numbers of thriving digital minds in the future, which we often picture as more human-like than animal-like.
I think the total view on population ethics is one important route to long-termism, but others are possible. For example, one could be very uncertain about what to value, but reason that it would be easier for us to figure out and realize our values if we are safer, wiser, and have access to more resources.
Thanks!
FWIW, I think that all matches my own views, with the minor exception that I think longtermism (as typically defined, e.g. by MacAskill) is consistent with human-centrism as well as with animal-inclusivity. (Just as it’s consistent with either intrinsically valuing only happiness and reductions in suffering or also other things like liberty and art, and consistent with weighting reducing suffering more strongly than increasing happiness or weighting them equally.)
Perhaps you meant that Open Philanthropy’s longtermist worldview is inherently animal-inclusive?
(Personally, I adopt an animal-inclusive longtermist view. I just think one can be a human-centric longtermist.)
Yes, I meant that the version of long-termism we think about at Open Phil is animal-inclusive.