Sure, that definition is interesting; it seems optimised for advancing arguments about how to do practical ethical reasoning. I think a variation of it would follow from mine: an ex ante very good decision is one contained in a set of options whose ex ante effects on the very long-run future are very good.
Still, it would be good to have a definition that generalises to suboptimal agents. Suppose that what's long-term optimal for me is to work twelve hours a day, but it's vanishingly unlikely that I'll actually do that. Then what can longtermism do for an agent like me? It would also be useful to be able to apply longtermism when evaluating the actions of politicians, even if we don't think any of those actions are long- or short-term optimal.
You could just restrict the set of options, or make the option the intention to follow through with an action. That intention may fail (and even backfire, e.g. through burnout), so you'd adjust your expectations with failure in mind.
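To make that adjustment concrete (a sketch in my own notation, not anything from the original discussion): if the option is the intention to do $a$, and $q$ is the probability of actually following through, then

$$\mathrm{EV}(\text{intend } a) = q \cdot \mathrm{EV}(a \mid \text{follow-through}) + (1 - q) \cdot \mathrm{EV}(a \mid \text{failure}),$$

where the failure term can be negative (burnout being one way the attempt backfires).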
Or attach to each action some probability of the agent actually doing it, and hold that for any positive-EV short-termist option, there's a much-higher-EV longtermist option that isn't much less likely to be chosen (it could be the same option in each case, but it need not be).
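Roughly formalised (again my notation, a sketch of the condition rather than a settled definition): let $S$ and $L$ be the sets of short-termist and longtermist options, and $p(a)$ the probability of the agent actually doing $a$. Then the claim is that for every $s \in S$ with $\mathrm{EV}(s) > 0$ there is some $\ell \in L$ with

$$\mathrm{EV}(\ell) \gg \mathrm{EV}(s) \quad \text{and} \quad p(\ell) \geq (1 - \epsilon)\, p(s)$$

for some small $\epsilon$, with the same $\ell$ allowed (but not required) to witness this for every $s$.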