Thanks for making this linkpost, Evelyn! I did have some thoughts on this episode, which I'll split into separate comments so it's easier to keep discussion organised. (A basic point is that the episode was really interesting, and I'd recommend others listen as well.)
A bundle of connected quibbles:
Ajeya seems to use the term "existential risk" when meaning just "extinction risk"
She seems to imply totalism is necessary for longtermism
She seems to imply longtermism is only/necessarily focused on existential risk reduction
(And I disagree with those things.)
An illustrative quote from Ajeya:
I think I would characterise the longtermist camp as the camp that wants to go all the way with buying into the total view – which says that creating new people is good – and then take that to its logical conclusion, which says that bigger worlds are better, bigger worlds full of people living happy lives are better – and then take that to its logical conclusion, which basically says that because the potential for really huge populations is so much greater in the future – particularly with the opportunity for space colonisation – we should focus almost all of our energies on preserving the option of having that large future. So, we should be focusing on reducing existential risks.
But "existential risks" include not just extinction risk but also risks of unrecoverable collapse, unrecoverable dystopia, and some (but not all) s-risks/suffering catastrophes. (See here.)
And my understanding is that, if we condition on rejecting totalism:
Risk of extinction becomes way less important
Though it'd still matter due to its effects on the present generation
Risk of unrecoverable collapse probably becomes way less important (though this is a bit less clear)
Risk of unrecoverable dystopia and s-risks still retain much of their importance
(See here for some discussion relevant to those points.)
So one can reasonably be a non-totalist yet still prioritise reducing existential risk, especially risk of unrecoverable dystopias.
Relatedly, a fair number of longtermists are suffering-focused and/or prioritise s-risk reduction, sometimes precisely because they reject the idea that making more happy beings is good but do think making more suffering beings is bad.
Finally, one can be a longtermist without prioritising either reducing extinction risk or reducing other existential risks. In particular, one could prioritise work on what I'm inclined to call "non-existential trajectory changes". From a prior post of mine:
But what if some of humanity's long-term potential is destroyed, but not the vast majority of it? Given Ord and Bostrom's definitions, I think that the risk of that should not be called an existential risk, and that its occurrence should not be called an existential catastrophe. Instead, I'd put such possibilities alongside existential catastrophes in the broader category of things that could cause "persistent trajectory changes". More specifically, I'd put them in a category I'll term in an upcoming post "non-existential trajectory changes". (Note that "non-existential" does not mean "not important".)
(Relatedly, my impression from a couple of videos or podcasts is that Will MacAskill is currently interested in thinking more about a broad set of trajectory changes longtermists could try to cause/prevent, including but not limited to existential catastrophes.)
I expect Ajeya knows all these things. And I think it's reasonable for a person to think that extinction risks are far more important than other existential risks, that the strongest argument for longtermism rests on totalism, and that longtermists should only/almost only prioritise existential/extinction risk reduction. (My own views are probably more moderate versions of those stances.) But it seems to me that it's valuable to not imply that those things are necessarily true or true by definition.
(Though it's of course easy to state things in ways that are less than perfectly accurate or nuanced when speaking in an interview rather than producing edited, written content. And I did find a lot of the rest of that section of the interview quite interesting and useful.)
Somewhat relatedly, Ajeya seems to sort-of imply that "the animal-inclusive worldview" is necessarily neartermist, and that "the longtermist worldview" is necessarily human-centric. For example, the above quote about longtermism focuses on "people", which I think would typically be interpreted as just meaning humans, and as very likely excluding at least some beings that might be moral patients (e.g., insects). And later she says:
And then within the near-termism camp, there's a very analogous question of, are we inclusive of animals or not?
But I think the questions of neartermism vs longtermism and animal-inclusivity vs human-centrism are actually fairly distinct. Indeed, I consider myself an animal-inclusive longtermist.
I do think it's reasonable to be a human-centric longtermist. And I do tentatively think that even animal-inclusive longtermism should still prioritise existential risks, and still with extinction risks as a/the main focus within that.
But I think animal-inclusivity makes at least some difference (e.g., pushing a bit in favour of prioritising reducing risks of unrecoverable dystopias). And it might make a larger difference. And in any case, it seems worth avoiding implying that all longtermists must be focused only or primarily on benefiting humans, since that isn't accurate.
(But as with my above comment, I expect that Ajeya knows these things, and that the fact she was speaking rather than producing edited written content is relevant here.)
I haven't finished listening to the podcast episode yet, but I picked up on a few of these inaccuracies and was disappointed to hear them. As you say, I would be surprised if Ajeya isn't aware of these things. Anyone who has read Greaves and MacAskill's paper The Case for Strong Longtermism should know that longtermism doesn't necessarily mean a focus on reducing x-risk, and that it is at least plausible that longtermism is not conditional on a total utilitarian population axiology*.
However, given that many people listening to the show might not have read that paper, I feel these inaccuracies matter and might mislead people. If longtermism is robust to different views (or at least if this is plausible), then it is very important for EAs to be aware of this. More generally, I think EAs should be aware of anything that might bear on deciding between cause areas, given the potentially vast differences in value between them.
*Even the importance of reducing extinction risk isn't conditional on total utilitarianism. For example, it could be vastly important under average utilitarianism if we expect the future to be good, conditional on humans not going extinct. That said, I'm not sure how many people take average utilitarianism seriously.
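As a rough toy illustration of that footnote (with entirely made-up numbers, chosen only to show the shape of the argument, not as claims about actual populations or wellbeing levels): if the all-time average wellbeing is much higher conditional on survival than conditional on extinction, then even a small reduction in extinction probability matters a lot under average utilitarianism.

```python
# Toy sketch (all numbers are made-up assumptions, purely illustrative):
# average utilitarianism evaluated over everyone who ever lives.

PAST_PEOPLE = 1e11           # people who have existed so far (assumption)
PAST_AVG_WELLBEING = 1.0     # their assumed average lifetime wellbeing
FUTURE_PEOPLE = 1e15         # people who exist if extinction is avoided (assumption)
FUTURE_AVG_WELLBEING = 10.0  # their assumed average lifetime wellbeing

def all_time_average(extinct: bool) -> float:
    """Average wellbeing across all people who ever live."""
    total = PAST_PEOPLE * PAST_AVG_WELLBEING
    count = PAST_PEOPLE
    if not extinct:
        total += FUTURE_PEOPLE * FUTURE_AVG_WELLBEING
        count += FUTURE_PEOPLE
    return total / count

def expected_average(p_extinction: float) -> float:
    """Expected all-time average wellbeing, given an extinction probability."""
    return (p_extinction * all_time_average(extinct=True)
            + (1 - p_extinction) * all_time_average(extinct=False))

# Cutting extinction risk from 20% to 19% raises the expected all-time
# average by roughly 0.09 wellbeing units in this toy setup -- large
# relative to the assumed present-day average of 1.0.
print(expected_average(0.20))  # ~8.20
print(expected_average(0.19))  # ~8.29
```

Obviously this is just arithmetic on assumptions, but it shows why the footnote's point goes through: under average utilitarianism, what matters is how good the future is conditional on survival, not how many people it contains.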
Update: I sort-of adapted this comment into a question for Ajeya's AMA, and her answer clarifies her views. (It seems like she and I do in fact basically agree on all of these points.)
Thank you for writing this critique; it was a thought I had while listening as well. In my experience, many EAs make the same mistake, not just Ajeya.