I don't recall the paper discussing the possibility that longtermist interventions could backfire for their intended effects
The paper's main working example is just about any intelligent civilization existing, and doesn't get into what that civilization is doing or how valuable it is (which therefore includes things like not discussing whether it's better or worse than extinction)
But Tarsney does acknowledge roughly that second point in one place:
Additionally, there are other potential sources of epistemic resistance to longtermism besides Weak Attractors that this paper has not addressed. In particular, these include:
Neutral Attractors: To entertain small values of r [the rate of ENEs], we must assume that the state S targeted by a longtermist intervention, and its complement ¬S, are both at least to some extent "attractor" states: Once a system is in state S, or state ¬S, it is unlikely to leave that state any time soon. But to justify significant values of ve and vs, it must also be the case that the attractors we are able to target differ significantly in expected value. And it's not clear that we can assume this. For instance, perhaps "large interstellar civilization exists in spatial region X" is an attractor state, but "large interstellar civilization exists in region X with healthy norms and institutions that generate a high level of value" is not. If civilizations tend to "wander" unpredictably between high-value and low-value states, it could be that despite their astronomical potential for value, the expected value of large interstellar civilizations is close to zero. In that case, we can have persistent effects on the far future, but not effects that matter (in expectation).
He says "low-value" rather than "negative value", but I assume he actually meant negative value, because random wandering between high and low positive values wouldn't produce an EV (for civilization existing rather than not existing) close to 0.
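To make that point concrete, here is a quick numerical sanity check (the specific state values are made up for illustration, not taken from the paper): if a civilization wanders uniformly among states that all have positive value, its long-run average value is bounded well above zero; only if some states carry negative value can the expectation land near zero.

```python
import random

random.seed(0)

def mean_value(states, steps=100_000):
    """Long-run average per-period value of a civilization that
    wanders uniformly at random among the given value states."""
    return sum(random.choice(states) for _ in range(steps)) / steps

# Wandering between a high-value state and a merely low (but still
# positive) value state: the expectation stays clearly positive.
positive_only = mean_value([10.0, 0.1])

# Only when some wandered-into states have *negative* value can the
# long-run expectation come out close to zero.
mixed_sign = mean_value([10.0, -10.0])
```

Here `positive_only` lands near 5 (the midpoint of the two positive values), while `mixed_sign` lands near 0, which is why "low-value" states alone can't drive the expected value of existing civilizations to zero.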