I’m very grateful for your comment.

Do you think I should add an explicit caveat noting that the reductio assumes only self-regarding reasons / preferences? For instance, I’m not in favor of cryonics for myself: I currently consider that, given the required investment plus all the uncertainties, I’m likely better off, from a moral point of view, donating to effective charities (or to another project I might value even after death, such as making my loved ones happy). But notice this has nothing to do with time preference (quite the opposite).
About Sarah’s example… Well, I agree with you; but notice that the reasoning in the Cryonics reductio is still valid, and that was my whole point. I’m not advocating for cryonics; I’m basically asking whether one thinks it’s a bad option because it aims at future experiences. I think someone could consistently bite this bullet. Actually, my whole point (which is still quite entangled, I admit, and I thank your comment for exposing it) is that we often mix types of reasoning tied to a subjective / contextual / (philosophically) relativistic notion of time (i.e., “Sarah in the present” vs. “Sarah in the future”) with some sort of (quasi-)objective / t-series notion (“Sarah at t”), something like the “point of view of the universe” or “the point of view of humanity.” (Again, thanks to Gavin for directing my attention to this.) When we specify which point of view we are evaluating from, most conundrums seem to disappear… except the next one.
I’m very interested in reading more about this:
The only reason people are even entertaining pure discounting is that they are worried about the paradoxes you get into if you end up having infinite total utility (yes, difficulties remain even if you just try and directly define a preference relation on possible worlds)
Of course, this is a real theoretical problem. However, I suspect discounting because of uncertainty (the possibility of extinction, etc.) might be enough to avoid it, as Nicholas Stern proposes. But I really get lost when we start talking about infinities.
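To make the uncertainty-based fix concrete (a minimal sketch in my own notation, not Stern’s exact model): if humanity faces a constant annual extinction probability $\delta$, the probability of surviving to year $t$ is $e^{-\delta t}$, so a constant utility stream of $u$ per year has finite expected total value even with zero pure time preference:

$$\int_0^\infty u \, e^{-\delta t} \, dt = \frac{u}{\delta} < \infty$$

With Stern’s figure of $\delta = 0.1\%$ per year, this comes to $u / 0.001 = 1000u$: the infinite sum collapses to the equivalent of roughly a thousand years of guaranteed utility, which is how this kind of discounting can defuse the infinite-utility paradoxes without any genuine preference for the present.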
I’m not sure I completely followed #1, but maybe this will answer what you are getting at.
I agree that the following argument is valid:
Either the time discounting rate is 0, or it is morally preferable to use your money/resources to produce utility now rather than to freeze yourself and produce utility later.
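To spell out the two horns with a toy calculation (the numbers and setup are purely illustrative, not part of the original argument): suppose cryonics converts resources that could produce utility $u$ today into the same utility $u$ delivered $T$ years from now. Under a discount rate $r$, the present value of the deferred option is

$$PV = \frac{u}{(1+r)^T}$$

which is strictly less than $u$ whenever $r > 0$: on that horn, spending the resources now is morally preferable. Only at $r = 0$ do the two options come out on a par, and the choice then turns on other factors (success probabilities, opportunity costs, and so on).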
However, I still don’t think you can argue that I can’t simultaneously hold that time discounting is irrelevant to what I selfishly prefer and that you shouldn’t apply discounting when evaluating what is morally preferable. And I think this substantially reduces how compelling the point is. I mean, I do lots of things I’m aware are morally non-optimal; I probably should donate more of my earnings to EA causes, etc., but sometimes I choose to be selfish, and when I consider cryonics it’s entirely as a selfish choice (I agree that even without discounting it’s a waste in utilitarian terms).
(Note that I’d draw a distinction between saying that something is morally non-optimal and saying that it is bad or blameworthy to do it, but that’s getting a bit into the weeds.)
---
Regarding the theoretical problems, I agree that they aren’t enough of a reason to accept a true discounting rate. Indeed, I’d go further and say that it is a mistake to infer things about what’s morally good from our wish that the notion of morality have certain nice properties. We don’t get to assume that morality will behave as we would like it to; we’ve just got to do our best with the means of inference we have.