this comes to mind
https://forum.effectivealtruism.org/posts/jGoExJpGgLnsNPKD8/does-ultimate-neartermism-via-eternal-inflation-dominate?commentId=oQf9RDLqjkLjobmne
in a thread there i mentioned that even for the described ‘ultimate neartermist’, the best action is actually to set up acausal trade with an ASI at an earlier point in time (i.e. by causing aligned ASI to exist). for a hypothetical agent which only cares about near-term beings, this would also hold, because most near-term beings are not on earth.
also, a hypothetical agent which cares only about near-term beings on earth may prefer to destroy earth rather than slowly reduce animal suffering. ‘it would want to destroy earth’ is a classic response to pure negative utilitarianism, but it applies to standard utilitarianism too whenever the beings valued (in this hypothetical case, just near-term beings on earth) experience more bad than good and this cannot be mitigated enough within the near term.
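to make that comparison concrete, here’s a minimal toy sketch of the expected-value logic. all the numbers (horizon, baseline welfare, mitigation rate) are purely illustrative assumptions i’m making up for the example, not estimates: the point is just that a maximizer of summed near-term earth welfare prefers ‘destroy earth’ (welfare 0 afterwards) over ‘slowly mitigate’ exactly when net welfare stays negative over the horizon despite mitigation.

```python
# toy model with made-up numbers; not estimates of anything real.
# utility = summed net welfare of near-term beings on earth over a horizon.

HORIZON_YEARS = 50                # hypothetical "near-term" cutoff
NET_WELFARE_PER_YEAR = -1.0e9     # assumption: suffering outweighs wellbeing
MITIGATION_PER_YEAR = 1.0e7       # assumption: slow yearly reduction in suffering

def utility_mitigate() -> float:
    """net welfare if suffering is reduced gradually each year (capped at 0)."""
    return sum(
        min(NET_WELFARE_PER_YEAR + MITIGATION_PER_YEAR * year, 0.0)
        for year in range(HORIZON_YEARS)
    )

def utility_destroy() -> float:
    """net welfare if earth is destroyed at t=0: no beings, so 0."""
    return 0.0

# with these numbers, mitigation never pushes net welfare above zero
# within the horizon, so the maximizer prefers destruction.
print(utility_mitigate())  # large negative total
print(utility_destroy())   # 0.0
assert utility_destroy() > utility_mitigate()
```

nothing hinges on the specific figures; flip the sign of the net-welfare assumption (or make mitigation fast enough) and the preference reverses, which is exactly the conditional in the paragraph above.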
(disclaimer: the ‘neartermism’ of actual humans is probably importantly different from these literal interpretations, relying more on moral intuition. i’m a longtermist myself.)