Thanks for making this linkpost, Evelyn! I did have some thoughts on this episode, which I’ll split into separate comments so it’s easier to keep discussion organised. (A basic point is that the episode was really interesting, and I’d recommend others listen as well.)
A bundle of connected quibbles:
Ajeya seems to use the term “existential risk” when meaning just “extinction risk”
She seems to imply totalism is necessary for longtermism
She seems to imply longtermism is only/necessarily focused on existential risk reduction
(And I disagree with those things.)
An illustrative quote from Ajeya:
I think I would characterise the longtermist camp as the camp that wants to go all the way with buying into the total view — which says that creating new people is good — and then take that to its logical conclusion, which says that bigger worlds are better, bigger worlds full of people living happy lives are better — and then take that to its logical conclusion, which basically says that because the potential for really huge populations is so much greater in the future — particularly with the opportunity for space colonisation — we should focus almost all of our energies on preserving the option of having that large future. So, we should be focusing on reducing existential risks.
But “existential risks” includes not just extinction risk but also risks of unrecoverable collapse, unrecoverable dystopia, and some (but not all) s-risks/suffering catastrophes. (See here.)
And my understanding is that, if we condition on rejecting totalism:
Risk of extinction becomes way less important
Though it’d still matter due to its effects on the present generation
Risk of unrecoverable collapse probably becomes way less important (though this is a bit less clear)
Risk of unrecoverable dystopia and s-risks still retain much of their importance
(See here for some discussion relevant to those points.)
So one can reasonably be a non-totalist yet still prioritise reducing existential risk, especially risk of unrecoverable dystopias.
Relatedly, a fair number of longtermists are suffering-focused and/or prioritise s-risk reduction, sometimes precisely because they reject the idea that making more happy beings is good but do think making more suffering beings is bad.
Finally, one can be a longtermist without prioritising either reducing extinction risk or reducing other existential risks. In particular, one could prioritise work on what I’m inclined to call “non-existential trajectory changes”. From a prior post of mine:
But what if some of humanity’s long-term potential is destroyed, but not the vast majority of it? Given Ord and Bostrom’s definitions, I think that the risk of that should not be called an existential risk, and that its occurrence should not be called an existential catastrophe. Instead, I’d put such possibilities alongside existential catastrophes in the broader category of things that could cause “Persistent trajectory changes”. More specifically, I’d put them in a category I’ll term in an upcoming post “non-existential trajectory changes”. (Note that “non-existential” does not mean “not important”.)
(Relatedly, my impression from a couple videos or podcasts is that Will MacAskill is currently interested in thinking more about a broad set of trajectory changes longtermists could try to cause/prevent, including but not limited to existential catastrophes.)
I expect Ajeya knows all these things. And I think it’s reasonable for a person to think that extinction risks are far more important than other existential risks, that the strongest argument for longtermism rests on totalism, and that longtermists should only/almost only prioritise existential/extinction risk reduction. (My own views are probably more moderate versions of those stances.) But it seems to me that it’s valuable to not imply that those things are necessarily true or true by definition.
(Though it’s of course easy to state things in ways that are less than perfectly accurate or nuanced when speaking in an interview rather than producing edited, written content. And I did find a lot of the rest of that section of the interview quite interesting and useful.)
Somewhat relatedly, Ajeya seems to sort-of imply that “the animal-inclusive worldview” is necessarily neartermist, and that “the longtermist worldview” is necessarily human-centric. For example, the above quote about longtermism focuses on “people”, which I think would typically be interpreted as just meaning humans, and as very likely excluding at least some beings that might be moral patients (e.g., insects). And later she says:
And then within the near-termism camp, there’s a very analogous question of, are we inclusive of animals or not?
But I think the questions of neartermism vs longtermism and animal-inclusivity vs human-centrism are actually fairly distinct. Indeed, I consider myself an animal-inclusive longtermist.
I do think it’s reasonable to be a human-centric longtermist. And I do tentatively think that even animal-inclusive longtermism should still prioritise existential risks, and still with extinction risks as a/the main focus within that.
But I think animal-inclusivity makes at least some difference (e.g., pushing a bit in favour of prioritising reducing risks of unrecoverable dystopias). And it might make a larger difference. And in any case, it seems worth avoiding implying that all longtermists must be focused only or primarily on benefitting humans, since that isn’t accurate.
(But as with my above comment, I expect that Ajeya knows these things, and that the fact she was speaking rather than producing edited written content is relevant here.)
I haven’t finished listening to the podcast episode yet, but I picked up on a few of these inaccuracies and was disappointed to hear them. As you say, I would be surprised if Ajeya isn’t aware of these things. Anyone who has read Greaves and MacAskill’s paper The Case for Strong Longtermism should know that longtermism doesn’t necessarily mean a focus on reducing x-risk, and that it is at least plausible that longtermism is not conditional on a total utilitarian population axiology*.
However, given that many people listening to the show might not have read that paper, I feel these inaccuracies are important and might mislead people. If longtermism is robust to different views (or at least if this is plausible), then it is very important for EAs to be aware of this. I think that it is important for EAs to be aware of anything that might be important in deciding between cause areas, given the potentially vast differences in value between them.
*Even the importance of reducing extinction risk isn’t conditional on total utilitarianism. For example, it could be vastly important under average utilitarianism if we expect the future to be good, conditional on humans not going extinct. That said, I’m not sure how many people take average utilitarianism seriously.
Update: I sort-of adapted this comment into a question for Ajeya’s AMA, and her answer clarifies her views. (It seems that she and I do in fact basically agree on all of these points.)
Thank you for writing this critique; it was a thought I had while listening as well. In my experience, many EAs make the same mistake, not just Ajeya.