I mean that, if you assume a broadly longtermist stance, then no matter your ethical theory, you should be most worried about humanity ceasing to exist: life might exist elsewhere, and since we are still the most capable species known, we might be able to help currently unknown moral patients (whether far away from us in space or in time).
So in the end, you’ll want to push humanity’s development as robustly as possible to maximize the chances of future good and minimize the chances of future harm. Which projects to allocate how many resources to then seems a question of empirics, or rather epistemics, not ethics.
Why do you think SFE(-ish) people would expect working to prevent extinction to be the best way to reduce s-risks, better even than s-risk prioritization research, s-risk capacity building, incidental s-risk-focused work, and agential s-risk-focused work?
The argument for helping aliens depends on their distance from us over time. Under what kinds of credences do you think it dominates other forms of s-risk reduction? Do you think it’s unreasonable to hold credences under which it doesn’t dominate? Or that under most SFE(-ish) people’s credences, it should dominate? Why?
Why would they expect working to prevent extinction to prevent more harm/loss than it causes? It sounds like you’re assuming away s-risks from conflicts and threats, or assuming that we’ll prevent more of the harm from these (and other s-risks) than we’ll cause as we advance technologically, expand, and interact with others. Why should we expect this? Getting closer to aliens may increase s-risk overall. EDIT: Also, this assumes that our descendants won’t be the ones spreading suffering.