Extinction forecloses all option value — including the option for future agents to course-correct if we’ve made mistakes. Survival preserves the ability to solve new problems. This isn’t a claim about net welfare across cosmic history; it’s a claim about preserving agency and problem-solving capacity.
I think it still implicitly is a claim about net welfare across the cosmos. You have to believe that preserving option value will actually, eventually, lead to higher net welfare across the cosmos[1], a belief which I argue relies on judgment calls. (And the option-value argument for x-risk reduction was already infamously regarded as a weak one in the GPR literature, including among x-risk reducers.)
You might say individuals can act on non-longtermist grounds while remaining longtermist-clueless. But this concedes that something must break the paralysis, and I’d argue that “preserve option value / problem-solving capacity” is a principled way to do so that doesn’t require the full judgment-call apparatus you describe.
Nice, that’s the crux! Yeah, so I tentatively find something like bracketing out long-term effects a more principled paralysis breaker than option-value preservation. I have no clue whether reducing the agony of the many animals we can robustly help in the near term is overall good once indirect long-term effects are considered, but I find doing it anyway far more justifiable than “reduce x-risks and let future people decide what to do”. I would prefer the latter if I bought the premises of the option-value argument for x-risk reduction, but then I wouldn’t be clueless and wouldn’t have a paralysis problem to begin with.
I don’t see any good reason to believe enabling our descendants is impartially better than doing the exact opposite (both positions rely on judgment calls that seem arbitrary to me). However, I see good (non-longtermist) reasons to reduce near-term animal suffering rather than increase it.
Unless you intrinsically value the existence of Earth-originated agents or something, and in a way where you’re happy to ignore the welfarist considerations that may leave you clueless on their own. In this case, you obviously think reducing P(extinction) is net positive. But then,
i) unlike most EAs, you don’t think that reducing P(extinction) is good for the welfare of all sentient beings. You care about x-risk reduction for other reasons, and you have to be clear about that.
ii) you have to argue that it is not arbitrary to intrinsically value the existence of Earth-originated agents (or whatever other non-welfarist and/or non-impartial thing) independently of what these agents do to moral patients. The reason EAs like impartial welfarism is that anything else seems unfairly arbitrary, so if I have to give up on impartial welfarism in order to value reducing P(extinction), I solve one problem by creating another (and this solution would feel suspiciously convenient to me).
This is a good comment—thanks!