I haven’t finished listening to the podcast episode yet, but I picked up on a few of these inaccuracies and was disappointed to hear them. As you say, I would be surprised if Ajeya isn’t aware of these things. Anyone who has read Greaves and MacAskill’s paper The Case for Strong Longtermism should know that longtermism doesn’t necessarily mean a focus on reducing x-risk, and that it is at least plausible that longtermism is not conditional on a total utilitarian population axiology*.
However, given that many people listening to the show might not have read that paper, I feel these inaccuracies matter and might mislead people. If longtermism is robust to different ethical views (or at least if this is plausible), then it is very important for EAs to be aware of this. More generally, I think EAs should be aware of anything that could bear on deciding between cause areas, given the potentially vast differences in value between them.
*Even the importance of reducing extinction risk isn’t conditional on total utilitarianism. For example, it could be vastly important under average utilitarianism if we expect the future to be good, conditional on humans not going extinct. That said, I’m not sure how many people take average utilitarianism seriously.