tl;dr: The paper ignores 2 factors that could strengthen the case for longtermism: possible increases in how efficiently resources are used and in what extremes of experiences can be reached.
Tarsney writes:
> The case for longtermism starts from the observation that the far future is very big. A bit more precisely, the far future of human-originating civilization holds vastly greater potential for value and disvalue than the near future. This is true for two reasons. The first is duration. On any natural way of drawing the boundary between the near and far futures (e.g., 1000 or 1 million years from the present), it is possible that our civilization will persist for a period orders of magnitude longer than the near future. For instance, even on the extremely conservative assumption that our civilization must die out when the increasing energy output of the Sun makes Earth too hot for complex life as we know it, we could still survive some 500 million years. Second is spatial extent and resource utilization. If our descendants eventually undertake a program of interstellar settlement, even at a small fraction of the speed of light, they could eventually settle a region of the Universe and utilize a pool of resources vastly greater than we can access today. Both these factors suggest that the far future has enormous potential for value or disvalue.
I essentially agree with all those points. Furthermore, given my current moral and empirical views, I think those factors are probably the main factors driving the case for longtermism.
But I think there are at least two other factors that are relevant and that might substantially add to the case for longtermism. (Though it’s possible that they add so little relative to the other factors that they won’t really be decision-relevant.)
---
The first factor is possible increases in the efficiency of resource usage. For a given quantity and type of matter or energy, future civilizations may be able to convert it into moral value or disvalue more efficiently than our current civilization can. For example, if we can create simulated humans or animals (or other artificial sentiences) that are morally relevant, these beings might experience the same pleasures or pains we can while requiring substantially less energy.
Thus, the factor by which the total quantity of moral (dis)value in the long-term future exceeds that in the present and near-term future may be even larger than one would expect from considering only the duration, spatial extent, and resources used in the future.
(Tarsney’s term “resource utilization” might seem like it should capture this idea, but his description suggests that he has in mind only changes in how many resources we use, not changes in how efficiently we use them.)
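To make the multiplicative structure explicit (this decomposition is my own rough framing, not notation from the paper): let $D$ be how much longer the far future lasts than the near future, $R$ how much larger its accessible resource pool is, and $E$ how much more efficiently those resources can be converted into (dis)value. Then, very roughly,

$$\frac{V_{\text{far}}}{V_{\text{near}}} \approx D \times R \times E,$$

and considering only duration, spatial extent, and resources amounts to implicitly setting $E = 1$.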
---
The second factor is possible increases in the extremes of experience that can be reached. It seems plausible that future civilizations will be able to create experiences more extremely good or bad than any experiences we can create today or that occur in nature. If so, this might increase the importance of the long-term future, if either of the following is true:

- Those experiences can be created relatively efficiently (e.g., only slightly less efficiently than substantially less extreme experiences)
- There is some moral reason why extreme experiences matter disproportionately more than other experiences (i.e., the moral significance of an experience increases superlinearly with its extremity, at least at some points of that “function”; see the sketch below)
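To sketch that second condition (the power-law form and the exponent here are purely illustrative assumptions of mine, not anything from the paper): suppose the moral weight of an experience of extremity $x > 0$ is

$$w(x) = x^{\alpha}, \qquad \alpha > 1.$$

Then doubling extremity more than doubles moral significance, since $w(2x) = 2^{\alpha}\,w(x) > 2\,w(x)$. The condition above only requires this kind of superlinearity somewhere on the curve, not the everywhere-superlinear behaviour a global power law implies.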
I’d guess that this factor is much less important than the efficiency factor, but it seems very hard to say.
The same basic point might also apply to non-experiential things that could be morally good or bad. (E.g., if art has intrinsic moral value, perhaps future civilizations could create art that is more extremely good than current art.)
---
I’ve seen roughly these ideas discussed in various places before, though I can’t remember precisely where. The concept of hedonium can be seen as a special case of the efficiency factor.
Chapter 8 of The Precipice, on “Our Potential”, is also relevant here. Ord splits that chapter into discussion of the future’s potential duration, its potential scale, and its potential quality. I imagine that the points I raised above were covered in that chapter, but I can’t remember for sure (I read the book a year ago, and foolishly enough I had not yet converted to using Anki as I read).
---
I think it’d be interesting for someone to think about how Tarsney’s models or parameter estimates could be tweaked to account for these factors, and maybe to see how much difference this makes (after plugging in some reasonable-seeming distributions for the parameters).
I think these would basically just be constant factors multiplying the overall impacts, assuming we remain near the peaks (of efficiency and of attainable extremes) for far longer than we spend making significant moves towards those peaks.
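As a minimal sketch of what treating these as constant multiplicative factors could look like: the toy Monte Carlo below (in Python) scales a stand-in baseline expected value by two made-up lognormal multipliers. None of the numbers come from Tarsney’s actual parameter estimates.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Placeholder lognormal distributions over the two extra multipliers:
# efficiency = how much more (dis)value per unit resource the far future produces
# extremity  = how much the reachable extremes of experience add on top of that
efficiency = rng.lognormal(mean=np.log(100), sigma=2.0, size=n)
extremity = rng.lognormal(mean=np.log(2), sigma=1.0, size=n)

# Stand-in for whatever expected value the base model assigns to a
# longtermist intervention; the two factors just scale it multiplicatively.
baseline_ev = 1.0
adjusted_ev = baseline_ev * efficiency * extremity

print(f"median multiplier: {np.median(efficiency * extremity):,.0f}")
print(f"mean multiplier:   {np.mean(efficiency * extremity):,.0f}")
```

With heavy-tailed distributions like these, the mean multiplier lands far above the median, so the adjustment can matter for expected-value calculations even when the median-case change is modest.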
The difference between intentionally optimizing for hedonistic welfare and a default trajectory with human-like minds could itself be on the scale of an existential catastrophe for a classical utilitarian, and more important than extinction. That said, optimizing in this way could also be far less tractable, and might not be an attractor state at all if it isn’t stable/persistent. This point could also generalize to other theories of welfare, just with different targets.