[Question] Non-consequentialist longtermism
Are there any developed versions of longtermism based on deontology, contractarianism, or virtue ethics? If so, how do they look different? It would seem very significant if all major schools of normative ethical theory converge.
Not a full theory, but Frick argues that humanity may have value in itself that’s worth preserving: https://scholar.princeton.edu/jfrick/publications/survival-humanity
Magnus Vinding defends suffering-focused ethics on the basis of various non-consequentialist views in sections 6.6-6.12 of his book, Suffering-Focused Ethics: Defense and Implications, and in the same book argues for reducing s-risks as a consequence of suffering-focused ethics. I don’t think he argues for reducing s-risks specifically on the basis of these non-consequentialist views (as opposed to reducing suffering generally), though, and I’m not sure these views would recommend reducing s-risks over other ways of preventing suffering; it would depend on the specifics.
Maybe a deontological antinatalist ethics? Some may be interested in particular in (voluntary) human extinction, which would probably have very long term effects. Bringing someone into existence may be seen as a serious harm, exploitation or at least being reckless with the life of another, and so impermissible. However, the reasons to convince others to stop having kids may be essentially consequentialist, unless you have positive duties to others.
A proposal I’ve heard in contractualist and deontological theories is that to choose between two actions, you should prioritize the individual(s) with the strongest claim or who would be harmed the most (not necessarily the worst off, to contrast with Rawls’ Difference Principle/maximin). This is the “Greater Burden Principle” by the contractualist Scanlon. Tom Regan, the deontologist animal rights theorist, also endorsed it, as the “harm principle”.
This principle might lend itself to longtermist thinking, but I’m not sure anyone has made a serious attempt to advocate for longtermism under such a view.
You might think that, unless extinction is promoted, someone in the distant future is more likely to be harmed far more severely than anyone in the near term would be harmed by promoting extinction, given the huge number of chances for things to go very badly for some individual among a huge future population, or the possibility of intentional optimization for suffering with advanced technology. Conversely, although I think contractualist and deontological views generally take additional people to be at best neutral in themselves, if you allowed extra lives to be good in themselves, then the individuals harmed most in the choice between extinction and non-extinction may be individuals in the distant future who would have lives with more value than any life so far, and failing to ensure they exist may constitute the greatest individual harm.
Furthermore, it has been argued that, according to contractualism, helping more people is better than helping fewer, when the individual harms are of the same magnitude, e.g. based on a tie-break argument or a veil of ignorance. See Suikkanen for some discussion.
There have also been recent attempts to adapt the Greater Burden Principle for cases with risk/uncertainty, since that has apparently been a problem. See Frick, for example. I think the handling of risk could be important for whether or not a theory endorses longtermism.
I guess Samuel Scheffler’s most recent book has a little bit of all of them (I haven’t read it yet). And Korsgaard makes a persuasive Kantian case about the disvalue of human extinction.
Thank you, those both look like exactly what I’m looking for
You’re welcome. Please write a post (even a shortform) about it someday.
Something that attracts me in this literature (particularly in Scheffler) is how it picks out intuitions that often collide with premises or conclusions of reasoning based on something like the rational agent model (i.e., vNM decision theory). I think that, even for a philosophical theorist, it could be useful to know how prevalent these intuitions are, and what possible (social or psychological) explanations could be offered for them. (I admit that, just as one philosopher’s modus ponens might be another’s modus tollens, someone’s intuition might be someone else’s cognitive bias.)
For instance, Scheffler mentions that we (at least he and I) have a “primitive” preference for humanity’s existence (I think by “humanity” he usually means rational agents similar to us: being driven extinct by Trisolarans would be bad, but not as bad as the end of all conscious rational agents). We usually prefer that humanity exist for a long time rather than a short one, even if both timelines have the same amount of utility, which seems to imply some sort of negative discount rate on the future, violating the usual “pure time preference” reasoning. Besides, we prefer world histories where there is a causal connection between generations and individuals over possible worlds with the same amount of utility (and the same length in time) where communities spring up and go extinct without any relation between one another. I admit this sounds weird, but I think it might explain my malaise towards discussions of infinite ethics.
From SEP:
I’m not sure it follows that a contractualist should focus on present needs, though, since I think some contractualists would accept the procreation asymmetry, and so preventing futures with very bad lives could be important.
Rawls was a contractualist and argued for saving for future generations (assuming they will exist) based on the veil of ignorance; see 4.5 Rawls’s Just Savings Principle in the SEP article Intergenerational Justice:
Still, this seems to me a basically consequentialist argument, since, from my understanding, Rawls’ treatment of the original position behind the veil of ignorance is basically consequentialist.
The article also discusses rights-based approaches and other reasons to care for future generations.
Apparently contractualists are basically Kantian deontologists, though. On the other hand, contractarianism attempts to motivate ethical behaviour through rational self-interest without assuming concern for acting morally or taking the interests of others into account. See the SEP article on contractarianism, which contrasts the two in its introduction and in a few other places in the article.
The paper “The Case for Strong Longtermism”, by Hilary Greaves and William MacAskill, goes into deontic strong longtermism in section 6. Hope this is useful.
Not really, because that paper is essentially just making the consequentialist claim that axiological longtermism implies that the actions we should take are those which help the long-run future the most. The Good is still prior to the Right.
But thank you for replying; in hindsight my reply seems a bit dismissive :)
I haven’t read it, but the title of this paper by Andreas at GPI at least fits what you’re asking: “Staking our future: deontic long-termism and the non-identity problem”.
Hi Alex, the link isn’t working
https://globalprioritiesinstitute.org/andreas-mogensen-staking-our-future-deontic-long-termism-and-the-non-identity-problem/