Thanks for doing this AMA!
I'd be interested to hear about what you or Open Phil include (and prioritise) within the "longtermism" bucket. In particular, I'm interested in things like:
When you/Open Phil talk about existential risk, are you (1) almost entirely concerned about extinction risk specifically, (2) mostly concerned about extinction risk specifically, or (3) somewhat similarly concerned about extinction risk and other existential risks (i.e., risks of unrecoverable collapse or unrecoverable dystopias)?
When you/Open Phil talk about longtermism, are you (1) almost entirely focused on things that directly or indirectly reduce existential risk or existential risk factors, or (2) also quite interested in causing/preventing other kinds of trajectory changes[1]?
When you/Open Phil talk about longtermism and/or existential risks, do you see that as human-centric or animal-inclusive? Or does that distinction seem irrelevant for your thinking on longtermism and/or existential risks?
Do you/Open Phil see totalism (in the population ethics sense) as an essential assumption required for you to prioritise longtermism / existential risk reduction? Or do you see it as just one (key) pathway to such a prioritisation?
If existing write-ups already address these points, feel very free to just point me in their direction!
(The specific trigger for these questions is some of your comments on your (very interesting!) recent 80k podcast appearance. I wrote up a few of my own thoughts on those comments here.)
[1] By "causing/preventing other kinds of trajectory changes", I basically have in mind:
shifting some probability mass from "ok" futures (which don't involve existential catastrophes) to especially excellent futures, or shifting some probability mass from especially awful existential catastrophes to somewhat "less awful" existential catastrophes. [As distinct from] shifting probability mass from "some existential catastrophe occurs" to "no existential catastrophe occurs" [obnoxiously quoting myself]
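To make that probability-mass framing a bit more concrete, here is a toy expected-value decomposition; the outcome classes and symbols are purely my own illustration, not something drawn from Open Phil's write-ups:

% Toy sketch (my own illustrative framing, not Open Phil's): expected value of the
% long-term future as a sum over coarse, mutually exclusive outcome classes.
\[
\mathbb{E}[V] \;=\; \sum_i p_i\, v_i
\;=\; p_{\mathrm{extinction}}\, v_{\mathrm{extinction}}
\;+\; p_{\mathrm{collapse}}\, v_{\mathrm{collapse}}
\;+\; p_{\mathrm{ok}}\, v_{\mathrm{ok}}
\;+\; p_{\mathrm{excellent}}\, v_{\mathrm{excellent}}
\]

In this framing, reducing existential risk moves probability mass out of the first two terms and into the last two, whereas the "other trajectory changes" I have in mind move mass from the "ok" term to the "excellent" term (or from worse catastrophes to less awful ones) without changing the total probability that some catastrophe occurs.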
I'd say that we're interested in all three: preventing outright extinction, preventing some other kind of existential catastrophe, and trajectory changes such as moving probability mass from "okay" worlds to "very good" worlds; I would expect some non-trivial fraction of our impact to come from all of those channels. However, I'm unsure how much weight each of these scenarios should get; that depends on various complicated empirical and philosophical questions we haven't fully investigated (e.g. "What is the probability civilization would recover from collapse of various types?" and "How morally valuable should we think it is if the culture which arises after a recovery from collapse is very different from our current culture, and that culture is the one which gets to determine the long-term future?"). In practice our grantmaking isn't making fine-grained distinctions between these or premised on one particular channel of impact: biosecurity and pandemic preparedness grantmaking may help prevent both outright extinction and civilizational collapse scenarios, AI alignment grantmaking may help prevent outright extinction or help make an "ok" future into a "great" one, etc.
I'd say that long-termism as a view is inherently animal-inclusive (just as the animal-inclusive view inherently also cares about humans); the view places weight on humans and animals today, and on humans / animals / other types of moral patients in the distant future. Often the fact that it's animal-inclusive is less salient, though, because the view is concerned with the potential for creating large numbers of thriving digital minds in the future, which we often picture as more human-like than animal-like.
I think the total view in population ethics is one important route to long-termism, but others are possible. For example, you could be very uncertain about what you value, but reason that it would be easier to figure out what we value, and to realize those values, if we are safer, wiser, and have access to more resources.
Thanks!
FWIW, I think that all matches my own views, with the minor exception that I think longtermism (as typically defined, e.g. by MacAskill) is consistent with human-centrism as well as with animal-inclusivity. (Just as it's consistent with intrinsically valuing either only happiness and reductions in suffering or also other things like liberty and art, and consistent with weighting reducing suffering more strongly than increasing happiness or weighting them equally.)
Perhaps you meant that Open Philanthropy's longtermist worldview is inherently animal-inclusive?
(Personally, I adopt an animal-inclusive longtermist view. I just think one can be a human-centric longtermist.)
Yes, I meant that the version of long-termism we think about at Open Phil is animal-inclusive.