(Note: I've made several important additions to this comment within the first ~30 minutes of posting it, plus some more minor edits after.)
I think this is an important point, so I've given you a strong upvote. Still, I think total utilitarians aren't rationally required to endorse EV maximization or longtermism, even approximately, except under certain further assumptions.
Tarsney has also written that stochastic dominance doesn't lead to EV maximization or longtermism under total utilitarianism if the probabilities (probability differences) involved are low enough, and he has said it's plausible the probabilities are in fact that low (not that it's his best guess that they are). See "The epistemic challenge to longtermism", especially footnote 41.
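(To make the gap concrete, here's a minimal toy example of my own, not Tarsney's, and without his background uncertainty:

$$
\begin{aligned}
&A:\ 1 \text{ unit of value with certainty}, \qquad B:\ 100 \text{ with probability } 0.1,\ 0 \text{ otherwise};\\
&\mathbb{E}[B] = 10 > 1 = \mathbb{E}[A], \quad \text{yet } \Pr(A \ge 1) = 1 > 0.1 = \Pr(B \ge 1) \ \text{and}\ \Pr(B \ge 100) = 0.1 > 0 = \Pr(A \ge 100).
\end{aligned}
$$

Neither option first-order stochastically dominates the other, so a bare requirement not to choose stochastically dominated options permits picking either; Tarsney's argument concerns when sufficient background uncertainty restores dominance for the higher-EV option.)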
It's also not clear to me that we shouldn't just ignore background noise that's unaffected by our actions, or more generally balance other concerns against stochastic dominance, like risk aversion or ambiguity aversion, particularly with respect to the difference one makes, as discussed in section 7.5 of "The case for strong longtermism" by Greaves and MacAskill. Greaves and MacAskill do argue that ambiguity aversion with respect to outcomes doesn't point against existential risk reduction, and, if I recall correctly from following their citations, that ambiguity aversion with respect to the difference one makes is too agent-relative.
On the other hand, using your own precise subjective probabilities to define rational requirements seems pretty agent-relative to me, too. Surely, if the correct ethics is fully agent-neutral, you should be required to do whatever actually maximizes value among the available options, regardless of your own particular beliefs about what's best. Or, at least, precise subjective probabilities seem hard to defend as agent-neutral, since different rational agents could have different beliefs even with access to the same information, due to different priors or because they weigh evidence differently.
Plus, without separability (ignoring what's unaffected) in the first place, the case for utilitarianism itself seems much weaker, since the representation theorems that imply utilitarianism, like Harsanyi's (and its generalization here) and the deterministic ones like the one here, require separability or something similar.
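(By separability here I mean, roughly, the standard independence-of-unconcerned-individuals condition; this is my paraphrase rather than the exact axiom used in those theorems:

$$
(u_A, u_C) \succeq (u_B, u_C) \ \text{for some welfare levels } u_C \text{ of the unaffected individuals} \;\Longrightarrow\; (u_A, u_C') \succeq (u_B, u_C') \ \text{for all } u_C',
$$

i.e., how the affected individuals' welfare profiles $u_A$ and $u_B$ are ranked shouldn't depend on the welfare of those left untouched.)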