Why is longtermism a compelling moral worldview?
A few sub-questions:
Why should we care about people who don't exist yet? And why should we dedicate our resources to making the world better for people who might exist (reducing x-risks) rather than using them for people who definitely exist and are currently suffering (global health, near-term global catastrophic risks, etc.)? Longtermism seems like a somewhat privileged and exclusive worldview, because it deprioritizes the very real lack of healthcare, food and potable water access, security, and education that plagues many communities.
Why are x-risks considered worse than global catastrophic risks? From a utilitarian standpoint, global catastrophic risks should be much worse than x-risks. All things considered, an x-risk is a fairly neutral outcome: extinction is worse than a generally happy future, but highly preferable to a generally unhappy future, which is exactly what a global catastrophic risk would produce.
Should the long-term preservation of humanity necessarily be a goal of effective altruism? I don't think preserving humanity is an inherently bad thing (though it would likely come at the expense of many other species). But I can imagine an extinction scenario I wouldn't be upset about: as technology progresses, people generally get richer and happier. A combination of rising GDP, more urbanized lifestyles, better access to birth control, and a mechanized workforce causes the birth rate to drop, and humanity comfortably, quietly declines. Natural habitats flourish, and we make room for other species to thrive as we have. Is this outcome acceptable from an altruistic perspective? If not, why?