It’s so easy to collapse into the arms of “if there’s even a small chance X will make a very good future more likely …” As with consequentialism, I totally buy the logic of this! The issue is that it’s incredibly easy to hide motivated reasoning in this framework. Figuring out what’s best to do is really hard, and this line of thinking conveniently ends the inquiry (for people who want that).
I have seen something like this happen, so I’m not claiming it doesn’t, but it’s pretty confusing to me. The logic pretty clearly doesn’t hold up: even if you accept that a “very good future” is all that matters, you still have to work out which action most increases the probability of that future. That’s still a hard question, and you can’t just end the inquiry with this line of thinking.
Yeah, I’m surprised by this as well. Both classical utilitarianism (in its extreme form, “everything that is not morally obligatory is forbidden”) and longtermism seem to have far fewer degrees of freedom than other commonly espoused ethical systems, so it would be naively surprising if these worldviews could justify a broader range of actions than close alternatives.