This is crazy, and I think it makes a lot more sense to just admit that part of you cares about galaxies and part of you cares about ice cream and say that neither of these parts are going to be suppressed and beaten down inside you.
Have you read "Is the potential astronomical waste in our universe too small to care about?", which asks the question: should these two parts of you make a (mutually beneficial) deal/bet while being uncertain of the size of (the reachable part of) the universe, such that the part of you that cares about galaxies gets more votes in a bigger universe, and vice versa? I have not been able to find a philosophically satisfactory answer to this question.
If you do, then one or the other part of you will end up with almost all of the votes when you find out for sure the actual size of the universe. If you don’t, that seems intuitively wrong also, analogous to a group of people who don’t take advantage of all possible benefits from trade. (Maybe you can even be Dutch booked, e.g. by someone making separate deals/bets with each part of you, although I haven’t thought carefully about this.)
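To make the structure of that bet a bit more concrete, here is a toy numerical sketch. All of the numbers (the prior, the control splits, and the per-vote values) are made-up assumptions for illustration, not anything from Wei's post; the point is only that such a deal can leave both parts better off in expectation even though one part ends up with most of the votes once the universe's size is known.

```python
# Toy sketch (made-up numbers) of why an internal deal/bet over universe size
# can look mutually beneficial in expectation.

p_big = 0.5  # assumed prior probability that the universe turns out "big"

# How much each internal part values one percentage point of control ("votes")
# in each world. The galaxy-caring part values control in a big universe vastly
# more (more is at stake); the ice-cream part values control about the same
# either way. These values are illustrative assumptions.
value_per_point = {
    "galaxy":    {"big": 1000.0, "small": 1.0},
    "ice_cream": {"big": 1.0,    "small": 1.0},
}

# Control allocations (percentage points) without and with the deal.
# Without a deal each part keeps 50/50 in both worlds. Under the deal the
# galaxy part gets more control if the universe is big, the ice-cream part
# gets more if it is small.
no_deal = {"galaxy": {"big": 50, "small": 50}, "ice_cream": {"big": 50, "small": 50}}
deal    = {"galaxy": {"big": 60, "small": 10}, "ice_cream": {"big": 40, "small": 90}}

def expected_utility(part, allocation):
    """Probability-weighted value of a part's control across the two worlds."""
    return (p_big * allocation[part]["big"] * value_per_point[part]["big"]
            + (1 - p_big) * allocation[part]["small"] * value_per_point[part]["small"])

for part in ("galaxy", "ice_cream"):
    before = expected_utility(part, no_deal)
    after = expected_utility(part, deal)
    print(f"{part}: {before:.1f} -> {after:.1f} ({'better' if after > before else 'worse'})")
```

With these assumptions both parts gain in expectation, which is why the deal looks like an unexploited gain from trade if refused; the tension is that once the actual size is known, one part holds nearly all the votes.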
It strikes me as a fine internal bargain for some nonhuman but human-adjacent species; I would not expect the internal parts of a human to be able to abide well by that bargain.
I just commented on your linked astronomical waste post:
Wei, insofar as you are making the deal with yourself, consider that in the world in which it turns out that the universe could support doing at least 3^^^3 ops, you may not be physically capable of changing yourself to work more toward longtermist goals than you would otherwise. (That is, human nature is such that making huge sacrifices to your standard of living and quality of life negatively affects your ability to work productively on longtermist goals for years.) If this is the case, then the deal won't work, since one part of you can't uphold the bargain. So in the world in which it turns out that the universe can support only 10^120 ops, you should not devote less effort to longtermism than you would otherwise, despite being physically capable of devoting less effort.
In a related kind of deal, both parts of you may be capable of upholding the deal, in which case I think such deals may be valid. But it seems to me that you don't need UDT-like reasoning and the hypothetical deal to believe that your future self, with better knowledge of the size of the cosmic endowment, ought to change his behavior in the same way as implied by the deal argument. Example: suppose you're a philanthropist who plans to spend $X of your wealth on short-termist philanthropy and $X on longtermist philanthropy while you're initially uncertain about the size of the cosmic endowment, because you think this split is optimal given your current beliefs and uncertainty. If you later find out that the universe can support 3^^^3 ops, I think this should cause you to shift how you spend your $2X toward longtermist philanthropy, simply because the longtermist philanthropic opportunities now seem more valuable. Similarly, if you find out that the universe can only support 10^120 ops, then you ought to update toward giving more to short-termist philanthropy.
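Here is a minimal sketch of that philanthropist example, assuming square-root diminishing returns to spending on each cause and made-up scale factors for how much the longtermist opportunity grows with the endowment. None of these functional forms or numbers come from the thread; the sketch only shows that ordinary expected-value maximization already shifts the split when you learn the endowment's size, with no deal required.

```python
# Toy sketch: re-optimizing a $2X philanthropic split after learning the
# size of the cosmic endowment. Value functions and scale factors are
# illustrative assumptions only.

import math

budget = 2.0  # the philanthropist's $2X, normalized

def total_value(longtermist_spend, endowment_scale):
    """Assumed diminishing-returns value of each cause; the longtermist
    cause's value scales with how large the endowment turns out to be."""
    shortterm_spend = budget - longtermist_spend
    return math.sqrt(shortterm_spend) + endowment_scale * math.sqrt(longtermist_spend)

def best_split(endowment_scale, steps=1000):
    """Grid-search the longtermist spend that maximizes total value."""
    candidates = [budget * i / steps for i in range(steps + 1)]
    return max(candidates, key=lambda s: total_value(s, endowment_scale))

for label, scale in [("prior (uncertain)", 2.0),
                     ("learn: only 10^120 ops", 1.0),
                     ("learn: vastly larger endowment", 10.0)]:
    share = best_split(scale) / budget
    print(f"{label}: optimal longtermist share ~ {share:.0%}")
```

Under these assumptions the optimal longtermist share rises when the endowment turns out larger and falls when it turns out smaller, which is just the common-sense updating described above.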
So is there really a case that UDT-like reasoning, plus hypothetical deals our past selves could have made with themselves, implies we ought to behave differently when we learn new things about the world than more common reasoning suggests? I don't see it.
Adding to this what’s relevant to this thread, re Eliezer’s model:
it makes a lot more sense to just admit that part of you cares about galaxies and part of you cares about ice cream and say that neither of these parts are going to be suppressed and beaten down inside you.
The way I think about the 'we can’t suppress and beat down our desire for ice cream' is that it’s part of our nature to want ice cream, meaning that we literally can’t just stop having ice cream, at least not without it harming our ability to pursue longtermist goals. (This is what I was referring to when I said above that the longtermist part of you would not be able to fulfill its end of the bargain in the world in which it turns out that the universe can support 3^^^3 ops.)
And we should not deny this fact about ourselves. Rather, we should accept it and go about eating ice cream, caring for ourselves, and working on short-termist goals that are important to us (e.g. reducing global poverty even in cases when it makes no difference to the long term future, to use David’s example from the OP).
To do otherwise is to try to suppress and beat something out of you that cannot be taken out of you without harming your ability to productively pursue longtermist goals. (What I’m saying is similar to Julia’s Cheerfully post.)
I don’t think this is a rationalization in general, though it can be in some cases. Rather, in general, I think it is the correct attitude to take (given a “strong longtermist” view) in response to certain facts about our human nature.
The easiest way to see this is just to look at other people in the world who have done a lot of good or who are doing a lot of good currently. They have not beaten the part of themselves that likes ice cream out of themselves. As such, it is not a rationalization for you to make peace with the fact that you like ice cream and fulfill those wants of yours. Rather, that is the smart thing to do to allow you to have more cheer and motivation to productively work on longtermist goals.
So I don’t have any problem with the conclusion that the overwhelming majority of expected value lies in the long term future. I don’t feel any need to reject this conclusion and tell myself that I should accept a different bottom line that reads that 50% of the value is in the long term future and 50% in the short term. Perhaps the behavioral policy I ought to follow is one in which I devote 50% of my time and effort to myself and my personal goals and 50% of my time and effort to longtermist goals, but that’s not because the satisfaction I get from eating ice cream has great intrinsic value relative to future lives; it’s because trying to devote much more of my time and effort to longtermist goals is counterproductive to the goal of advancing those longtermist goals. We know it’s generally counterproductive because the other people in the world doing the most longtermist good are not actively trying to deny the part of themselves that cares about things like ice cream.
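A toy way to see why devoting nearly all of one's time to longtermist work can be counterproductive: if longtermist output depends on both the hours you put in and a wellbeing multiplier that falls as personal time is squeezed, the output-maximizing split is interior rather than "all work, no ice cream". The functional form and constants below are made up purely for illustration, not a claim about the actual optimum for anyone.

```python
# Toy model (illustrative assumptions only): longtermist output as a function
# of the fraction of time spent working, with a wellbeing multiplier that
# decays as personal time disappears.

def longtermist_output(work_fraction):
    personal_fraction = 1.0 - work_fraction
    wellbeing = personal_fraction ** 0.5  # assumed diminishing returns to self-care
    return work_fraction * wellbeing

best = max((i / 100 for i in range(101)), key=longtermist_output)
print(f"output-maximizing work fraction ~ {best:.0%}")  # ~67%, not 100%
```

Whether the real optimum is 50%, 67%, or something else is an empirical question about a given person; the sketch only illustrates why the optimum is not at total self-denial.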
This isn’t really relevant to the point I was making, but the idea that longtermism has objective long-term value while ice cream now is a moral failing seems to presuppose moral objectivism. And that seems to be your claim: the only reason to value ice cream now is to make us better at improving the long term in practice. And I’m wondering why “humans are intrinsically unable to get rid of value X” is a criticism / shortcoming, rather than a statement about our values that should be considered in maximization. (To some extent, the argument for why to change our values is about coherency / stable time preferences, but that doesn’t seem to be the claim here.)
I’m not sure I know what you mean by “moral objectivism” here. To try to clarify my view, I’m a moral anti-realist (though I don’t think that’s relevant to my point) and I’m fairly confident that the following is true about my values: the intrinsic value of my enjoyment of ice cream is no greater than the intrinsic value of other individuals’ enjoyment of ice cream (assuming their minds are like mine and can enjoy it in the same way), including future individuals. I think we live at a time in history where our expected effect on the number of individuals that ultimately come into existence and enjoy ice cream is enormous. As such, the instrumental value of my actions (such as my action to eat or not eat ice cream) generally dwarfs the intrinsic value of my conscious experience that results from my actions. So it’s not that there’s zero intrinsic value to my enjoyment of ice cream, it’s just that that intrinsic value is quite trivial in comparison to the net difference in value of the future conscious experiences that come into existence as a result of my decision to eat ice cream.
The fact that I have to spend some resources on making myself happy in order to do the best job at maximizing value overall (which mostly looks like productively contributing to longtermist goals in my view) is just a fact about my nature. I don’t see it as a criticism or shortcoming of my nature, or of human nature in general, just a thing that is true. So our preferences do matter also; it just happens that when trying to do the most good we find that it’s much easier to do good for future generations in expectation than it is to do good for ourselves. So the best thing to do ends up being to help ourselves to the degree that helps us help future generations the most (such that helping ourselves any more or less causes us to do less for longtermism). I think human nature is such that that optimal balance looks like us making ourselves happy, as opposed to us making great sacrifices and living lives of misery for the greater good.
Let me know if you’re still unsure why I take the view that I do.
I think I can restate your view: there is no objective moral truth, but individual future lives are equally valuable to individual present lives (I assume we will ignore the epistemic and economic arguments for now), and your life in particular has no larger claim on your values than anyone else’s.
That certainly isn’t incoherent, but I think it’s a view that few are willing to embrace—at least in part because even though you do admit that personal happiness, or caring for those close to you, is instrumentally useful, you also claim that it’s entirely contingent, and that if new evidence were to emerge, you would endorse requiring personal pain to pursue greater future or global benefits.
I think that’s an accurate restatement of my view, with the caveat that I do have some moral uncertainty, i.e. give some weight to the possibility that my true moral values may be different. Additionally, I wouldn’t necessarily endorse that people be morally required to endure personal pain; personal pain would just be necessary to do greater amounts of good.
I think the important takeaway is that doing good for future generations via reducing existential risk is probably incredibly important, i.e. much more than half of expected future value exists in the long-term future (beyond a few centuries or millennia from now).
I had not seen this, and it definitely seems relevant, but it’s still much closer to strong longtermism than what I’m (tentatively) suggesting.