Thanks for writing this up! I feel like this overlaps in places with some of the research directions that are relevant to evidential cooperation in large worlds.
How should we idealize preferences, or how would past you or a past conservationist have wanted their preferences to be idealized? Did they want to optimize for the preferences of individual animals and just think that conservation was a way to achieve that? Brian Tomasik could perhaps convince them otherwise. Maybe the maximally cute world optimizes for the preferences of the animals that are left much better than the real world does? Maybe it is biased toward mammals with few offspring, so the happiness/suffering ratio is better? Even if the past conservationist thinks that it’s not optimal, maybe it is better than the past world even by their lights. In other contexts, the preferences of past people may be conditioned on the environments they found themselves in. If today’s environments are different, maybe they’d think that their past priorities are no longer applicable or necessary.
How do the mechanics of the trade work? Superrationality hasn’t been around for long, so even the more EDT-leaning Calvinists may not actually be superrational cooperation partners. It’s also unclear how much weight past cooperators will have in the acausal bargaining solution. Maybe it’s not so much a trade as a commitment that we hope to make strong enough to also bind future generations. Then it’s more a question of how to design particularly permanent institutions.
If it’s a commitment that we hope will bind future generations, we’ll have to make various tradeoffs – e.g., how much commitment maximizes the expected commitment, since too much could cause future generations to abandon it altogether. There are also all the questions of preference idealization that trade off the risk of self-serving bias against the risk of acting against the informed preferences of the people you want to help.
There’s also an interesting phenomenon where the belief in moral progress (whether justified or not) will bias everyone to think that future generations will have an easier time implementing the commitment – at least if they conceive of moral progress as converging on some optimum. The difference from past generations will continually shrink, so the further you are in the future, the less your commitments diverge from what you would’ve wanted anyway.
I also wonder how I should conceive of dead people and past versions of myself compared to people who are very set in their ways. If there’s a person I like who has weird beliefs and explicitly refuses to update away from them no matter how much contradictory evidence they see, my respect for the person’s epistemic opinions will wane to some extent. A dead person or a past version of myself is (de facto and de jure respectively) a particularly extreme version of such a person. So I want to respect them and their preferences, but I’ll probably discount the weight of their views in my moral compromise because I don’t trust their epistemics much.
I find this all very interesting and would in particular be interested in any examples of preferences of past people that are (1) stable even under slight idealization, (2) strongly held, and (3) fairly easy to satisfy today. Past people are probably rather few, but some of them might be quite powerful because of their leverage over the future, and generally I want to protect the strong preferences of minorities in my moral compromise.