Having read relatively little of it, my understanding is that the point of the academic literature on longtermism (which does not usually assume total utilitarian views?) is to show that longtermism is compatible with (and in some cases required by) a broad range of moral views considered respectable within academic philosophy.
So they don't talk about science-fictiony stuff, since their claim is that longtermism is robustly true under (or at least compatible with) reasonable academic views in moral philosophy.
This is also my impression of some of Toby Ord's work in The Precipice (particularly chapter 2) and some of the work of GPI, at least. I'm not sure how much it applies more widely to academic work that's explicitly on longtermism, as I haven't read a great deal of it yet.
On the other hand, many of Bostrom's seminal works on existential risks very explicitly refer to such "science-fictiony" scenarios. And these effectively seem like foundational works for longtermism too, even if they didn't yet use that term. E.g., Bostrom writes:
Another estimate, which assumes that future minds will be mainly implemented in computational hardware instead of biological neuronal wetware, produces a lower bound of 10^54 human-brain-emulation subjective life-years (or 10^71 basic computational operations) (Bostrom 2003). If we make the less conservative assumption that future civilizations could eventually press close to the absolute bounds of known physics (using some as yet unimagined technology), we get radically higher estimates of the amount of computation and memory storage that is achievable and thus of the number of years of subjective experience that could be realized.
Indeed, in the same paper, he even suggests that not ending up in such scenarios could count as an existential catastrophe in itself:
Permanent stagnation is instantiated if humanity survives but never reaches technological maturity, that is, the attainment of capabilities affording a level of economic productivity and control over nature that is close to the maximum that could feasibly be achieved (in the fullness of time and in the absence of catastrophic defeaters). For instance, a technologically mature civilization could (presumably) engage in large-scale space colonization through the use of automated self-replicating "von Neumann probes" (Freitas 1980; Moravec 1988; Tipler 1980). It would also be able to modify and enhance human biology, say, through the use of advanced biotechnology or molecular nanotechnology (Freitas 1999 and 2003). Further, it could construct extremely powerful computational hardware and use it to create whole-brain emulations and entirely artificial types of sentient, superintelligent minds (Sandberg and Bostrom 2008). It might have many additional capabilities, some of which may not be fully imaginable from our current vantage point.
This is also relevant to some other claims of Abraham's in the post or comments, such as "it seems worth noting that much of the literature on longtermism, outside the Foundational Research Institute, isn't making claims explicitly about digital minds as the primary holders of future welfare, but just focuses on future organic human populations (Greaves and MacAskill's paper, for example), and similar-sized populations to the present-day human population at that." I think this may well be true for the academic literature that's explicitly about "longtermism", but I'm less confident it's true for the wider literature on "longtermism", or for the academic literature that seems effectively longtermist.
It also seems worth noting that, to the extent that a desire to appear respectable/conservative explains why academic work on longtermism shies away from discussing things like digital minds, it may also explain why such literature makes relatively little mention of nonhuman animals. I think a substantial concern for the suffering of wild animals would be seen as similarly "wacky" by many audiences, perhaps even more so than a belief that most "humans" in the future may be digital minds. So it may still be that, "behind closed doors", people from e.g. GPI do think about the relevance of animals to far-future stuff.
(Personally, I'd prefer it if people could just state all such beliefs pretty openly, but I can understand strategic reasons to refrain from doing so in some settings, unfortunately.)
Also, interestingly, Bostrom does appear to note wild animal suffering in the same paper (though only in one footnote):
We might also have responsibilities to nonhuman beings, such as terrestrial (and possibly extraterrestrial) animals. Although we are not currently doing much to help them, we have the opportunity to do so in the future. If rendering aid to suffering nonhuman animals in the natural environment is an important value, then achieving technological maturity in a manner that fails to produce such aid could count as flawed realization. Cf. McMahan 2010; Pearce 2004.
Thanks for this. I think for me the major lesson from the comments/conversations here is that many longtermists have much stronger beliefs in the possibility of future digital minds than I thought, and I definitely see how that belief could lead one to think that future digital minds are of overwhelming importance. However, I do think that for utilitarian longtermists, animal considerations might dominate in possible futures where digital minds don't come about or spread massively, so to some extent one's credence in my argument/concern for future animals ought to be determined by how much one believes or disbelieves in the possibility and importance of future digital minds.
As someone who is not particularly familiar with the longtermist literature, beyond a fairly light review done for this piece and a general sense of the topic from having spent time in the EA community, I'd say I did not really have the impression that the longtermist community was concerned with future digital minds (outside the EA Foundation, etc.). Though that may just have been bad luck.