Maybe a deontological antinatalist ethics? Some may be interested in particular in (voluntary) human extinction, which would probably have very long-term effects. Bringing someone into existence may be seen as a serious harm, as exploitation, or at least as recklessness with the life of another, and so impermissible. However, the reasons to convince others to stop having kids may be essentially consequentialist, unless you have positive duties to others.
A proposal I’ve heard in contractualist and deontological theories is that, to choose between two actions, you should prioritize the individual(s) with the strongest claim or who would be harmed the most (not necessarily the worst off, in contrast with Rawls’ Difference Principle/maximin). This is the “Greater Burden Principle” of the contractualist Scanlon. Tom Regan, the deontological animal rights theorist, also endorsed it, as the “harm principle”.
This principle might lend itself to longtermist thinking, but I’m not sure anyone has made a serious attempt to advocate for longtermism under such a view.
You might think that, unless you promote extinction, someone in the distant future is likely to be harmed far more than anyone in the short-term future would be harmed by promoting extinction, due to the huge number of chances for things to go very badly for some individual among a huge number of future individuals, or due to intentional optimization for suffering with advanced technology. Contractualist and deontological views generally take additional people to be at best neutral in themselves, I think, but if you allowed extra lives to be good in themselves, the individuals who would be harmed the most in the choice between extinction and non-extinction may be individuals in the distant future who would have lives with more value than any life so far, and not ensuring they exist may cause the greatest individual harm.
Furthermore, it has been argued that, according to contractualism, helping more people is better than helping fewer, when the individual harms are of the same magnitude, e.g. based on a tie-break argument or a veil of ignorance. See Suikkanen for some discussion.
There have also been recent attempts to adapt the Greater Burden Principle for cases involving risk/uncertainty, since handling risk has apparently been a problem for the view. See Frick, for example. I think how a theory handles risk could be important for whether or not it endorses longtermism.