I like the simpler/more general model, although I think you should also take expectations, and allow for multiple joint probability distributions for the outcomes of a single action to reflect our deep uncertainty (and there’s also moral uncertainty, but I would deal with that separately on top of this). That almost all of the (change in) value happens in the longterm future isn’t helpful to know if we can’t predict which direction it goes.
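For concreteness, here is a minimal sketch of what carrying multiple joint distributions looks like; all outcomes, values, and probabilities below are hypothetical, chosen only to illustrate the point that the EV can be sign-ambiguous:

```python
# Toy sketch: evaluate one action under deep uncertainty by carrying a
# *set* of candidate probability distributions over outcomes, rather than
# a single distribution. All numbers are hypothetical.

outcomes = ["good long-run future", "bad long-run future", "negligible effect"]
values = [100.0, -100.0, 1.0]  # stipulated value of each outcome

# Several plausible joint distributions over the same three outcomes,
# reflecting deep uncertainty about which is correct.
candidate_distributions = [
    [0.6, 0.3, 0.1],
    [0.3, 0.6, 0.1],
    [0.4, 0.4, 0.2],
]

def expected_value(probs, vals):
    # Standard expectation: sum of probability-weighted values.
    return sum(p * v for p, v in zip(probs, vals))

evs = [expected_value(dist, values) for dist in candidate_distributions]
lo, hi = min(evs), max(evs)
# The EV is only pinned down to an interval; when lo < 0 < hi, knowing
# that almost all the value is in the long-run future doesn't tell you
# which direction the action pushes it.
```

The point of the sketch is just that under this kind of uncertainty an action's expected value is an interval across the candidate distributions, and the interval can straddle zero.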
Greaves and MacAskill define strong longtermism this way:

Let strong longtermism be the thesis that in a wide class of decision situations, the option that is ex ante best is contained in a fairly small subset of options whose ex ante effects on the very long-run future are best.
So this doesn’t say that most of the value of any given action is in the tail of the rewards; perhaps you can find some actions with negligible ex ante longterm consequences, e.g. examples of simple cluelessness.
Sure, that definition is interesting; it seems optimised for advancing arguments about how to do practical ethical reasoning. I think a variation of it would follow from mine: an ex ante very good decision is contained in a set of options whose ex ante effects on the very long-run future are very good.
Still, it would be good to have a definition that generalises to suboptimal agents. Suppose that what’s long-term optimal for me is to work twelve hours a day, but it’s vanishingly unlikely that I’ll do that. Then what can longtermism do for an agent like me? It’d also make sense for us to be able to use longtermism to evaluate the actions of politicians, even if we don’t think any of the actions are long- or short-term optimal.
You could just restrict the set of options, or make the option the intention to follow through with the action; that intention may fail (and even backfire, e.g. through burnout), so you'd adjust your expectations with failure in mind.
Or attach to each action some probability of actually doing it, and hold that for any positive-EV shorttermist option, there's a much higher-EV longtermist option which isn't much less likely to be chosen (it could be the same one in each case, but it need not be).
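As a toy sketch of that adjustment (every number below is hypothetical), you can score the intention to take an option by weighting its EV by the probability of follow-through, with failure allowed to backfire:

```python
# Toy sketch: evaluate the *intention* to take an option, weighting by the
# probability the agent actually follows through, and letting failure
# backfire (e.g. burnout). All numbers are hypothetical.

def intention_ev(ev_if_done, p_follow_through, ev_if_failed=0.0):
    """Expected value of intending an option, given follow-through may fail."""
    return p_follow_through * ev_if_done + (1.0 - p_follow_through) * ev_if_failed

# A modest shorttermist option the agent will almost certainly carry out...
shorttermist = intention_ev(ev_if_done=10.0, p_follow_through=0.9)

# ...versus a longtermist option that's harder to follow through on and
# backfires a little on failure, but whose EV advantage survives the
# adjustment: it's less likely to be chosen, but not *much* less likely.
longtermist = intention_ev(ev_if_done=1000.0, p_follow_through=0.5,
                           ev_if_failed=-5.0)
```

On these stipulated numbers the adjusted longtermist EV (497.5) still dwarfs the shorttermist one (9.0), which is the shape of the claim: the follow-through discount would have to be enormous before it flips the comparison.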