I don’t recall the paper discussing the possibility that longtermist interventions could backfire for their intended effects
The paper’s main working example is just about any intelligent civilization existing, and doesn’t get into what that civilization is doing or how valuable it is (and therefore, for example, doesn’t discuss whether that civilization existing is better or worse than extinction)
But Tarsney does acknowledge roughly that second point in one place:
Additionally, there are other potential sources of epistemic resistance to longtermism besides Weak Attractors that this paper has not addressed. In particular, these include:
Neutral Attractors: To entertain small values of r [the rate of ENEs], we must assume that the state S targeted by a longtermist intervention, and its complement ¬S, are both at least to some extent “attractor” states: Once a system is in state S, or state ¬S, it is unlikely to leave that state any time soon. But to justify significant values of ve and vs, it must also be the case that the attractors we are able to target differ significantly in expected value. And it’s not clear that we can assume this. For instance, perhaps “large interstellar civilization exists in spatial region X” is an attractor state, but “large interstellar civilization exists in region X with healthy norms and institutions that generate a high level of value” is not. If civilizations tend to “wander” unpredictably between high-value and low-value states, it could be that despite their astronomical potential for value, the expected value of large interstellar civilizations is close to zero. In that case, we can have persistent effects on the far future, but not effects that matter (in expectation).
He says “low-value” rather than “negative value”, but I assume he actually meant negative value: if a civilization merely wandered between high and low positive values, its expected value (for the civilization existing rather than not existing) would be some positive average of those values, not close to 0. An EV near 0 requires that some of the states wandered through are negative.
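That arithmetic point can be checked with a toy simulation (everything here is illustrative and mine, not from the paper): a civilization that wanders uniformly at random among a set of state values has an EV equal to the average of those values, which can only be near zero if some values are negative.

```python
import random

def simulate_mean_value(state_values, steps=100_000, seed=0):
    """Average realized value of a toy civilization that 'wanders'
    uniformly at random among the given state values each period."""
    rng = random.Random(seed)
    total = sum(rng.choice(state_values) for _ in range(steps))
    return total / steps

# Wandering between high and *low but positive* values:
# the EV settles near the positive midpoint, not near 0.
positive_only = simulate_mean_value([10.0, 1.0])    # ≈ 5.5

# Wandering between high and *negative* values:
# the EV can indeed land close to 0.
mixed_sign = simulate_mean_value([10.0, -10.0])     # ≈ 0
```

The uniform wandering and the particular state values are of course stand-ins; the point is only that averaging positive numbers never yields something near zero.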