Thanks for writing this! My sense from talking to non-EAs about longtermism is that most buy into asymmetric views of population ethics. I’m not sure what you say here will be very reassuring to them:
“Longtermism is a big tent, and includes room for “asymmetric” views of population ethics on which additional miserable lives are bad, but additional happy lives are not good but merely neutral. Such views still imply that we should be concerned about the risk of dystopian futures containing immense suffering (or “S-risks”). If there is a non-trivial chance of such S-risks eventuating, reducing these risks should plausibly be a key moral priority: astronomical suffering is not something to be viewed lightly, on any account.”
If you only care about S-risks, not X-risks, and still want longtermism to go through, you need to think that the level of suffering in the future could be much greater than the level of suffering at present, such that our diminished ability to prevent future suffering is offset by the sheer scale of that suffering. In other words, if you think there is already astronomical suffering in the world, due to, e.g., the tens of billions of factory-farmed animals living lives full of suffering, then you have to think there is a “non-trivial” chance of a far more dystopian future in order to be a longtermist. It’s pretty understandable to me why these people would rather work on fixing the dystopia we’re already in than on preventing a theoretically worse one. I would probably tweak the language in the above paragraph to acknowledge that.
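To make that trade-off concrete, here is a toy back-of-the-envelope comparison. Every number below is an illustrative assumption of mine, not an estimate from the paper or from any real source:

```python
# Toy BOTEC: present-focused suffering reduction vs. S-risk reduction.
# All numbers are made-up illustrative assumptions, not real estimates.

present_suffering = 1e10       # scale of suffering today, e.g. factory farming (units/year)
present_tractability = 1e-3    # fraction of that suffering a marginal effort can avert

future_suffering = 1e15        # scale of a dystopian future, if it occurs
s_risk_probability = 1e-4      # the "non-trivial" chance the dystopia eventuates
future_tractability = 1e-6     # our diminished ability to influence the far future

expected_present_impact = present_suffering * present_tractability
expected_future_impact = future_suffering * s_risk_probability * future_tractability

print(f"Present-focused work averts ~{expected_present_impact:.0e} units")
print(f"S-risk-focused work averts ~{expected_future_impact:.0e} units")
# With these particular numbers, present-focused work wins (1e7 vs 1e5);
# the longtermist conclusion requires future_suffering (or s_risk_probability,
# or future_tractability) to be large enough to flip the comparison.
```

The point of the sketch is just that the conclusion is sensitive to these inputs, which is why the disagreement above is understandable.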
Separately, I didn’t read the whole paper, so maybe you say this somewhere, but it might be worth mentioning that you don’t need longtermism to think that many of the “longtermist” things EAs are working on (e.g., preventing pandemics; reducing AI risk) are worth working on.

Thanks again for writing this!
“that the level of suffering in the future could be much greater than the level of suffering at present”
When you say “level” here, did you mean “amount”? If you think that people will suffer the same amount per person, or even less per person, in the future, but also that there will be far more future people than current people, and that you can improve things for a large fraction of those future people, you can still get the result that you will reduce suffering more by working on longtermist causes than by working on present-day ones.
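A minimal sketch of that amount-vs.-level point, again with purely made-up numbers:

```python
# Total suffering = per-person suffering ("level") x number of people affected.
# All numbers are illustrative assumptions only.

present_people = 8e9
present_level = 1.0             # per-person suffering today (arbitrary units)
present_fraction_helped = 0.10  # fraction of present people you can help

future_people = 1e13            # far more future people than current people
future_level = 0.5              # even if each future person suffers *less* ...
future_fraction_helped = 0.01   # ... and you can help a smaller fraction ...

present_reduction = present_people * present_level * present_fraction_helped
future_reduction = future_people * future_level * future_fraction_helped

print(f"Present-focused reduction: {present_reduction:.1e}")  # 8.0e+08
print(f"Future-focused reduction:  {future_reduction:.1e}")   # 5.0e+10
# ... the total amount of suffering averted can still be far greater for the
# future-focused work, because the future population is so much larger.
```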
Yes, I meant amount.
Thanks, I appreciate the helpful suggestions!