(One sidebar is that you should ideally do a global Nash bargain over all of your values, rather than bargaining over particular issues like the suffering of wildlife. That way people can rank all of their values and get the stuff that matters most to them. If you care a lot about wild animal suffering (WAS) and nothing about hedonium, and I care a lot about hedonium and nothing about WAS, a good trade is that we have no wild animal suffering and lots of hedonium. This is very hard to do but theoretically optimal.)
I have a slide deck on this solution if you’d like to see it!
We implemented a Nash bargaining solution in our moral parliament, and I came away with the impression that the results of Nash bargaining are very sensitive to your choice of defaults, and that for plausible defaults true bargains can be pretty rare. Anyone who is happy with the default gets disproportionate bargaining power. One default might be ‘no future at all’, but that’s going to make it hard to find any bargain with the anti-natalists. Another default might be ‘just more of the same’, but again, someone might like that and oppose any bargain that deviates much from it. Have you given much thought to picking the right default against which to measure people’s preferences? (Or is the thought that you would just exclude obstinate minorities?)
You (and @tylermjohn) might be interested in Diffractor’s Unifying Bargaining sequence. It argues that transferable-utility games are a better target than bargaining games alone (with, I believe, the Nash solution as a special case for bargaining games), and it also covers avoiding threats in bargaining and some further refinements.
I think the defaults won’t matter too much. Do you have any writing on the moral parliament that goes into the defaults issue in more detail?
Thanks for the suggestion. I’m interested in the issue of dealing with threats in bargaining.
I don’t think we ever published anything specifically on the defaults issue.
We were focused on allocating a budget in a way that respects the priorities of different worldviews. The central problem we encountered: we started by taking the default to be the allocation you get by giving each worldview its own slice of the total budget to spend as it wished. Since there are often options well-suited to each particular worldview, this leaves no room for good compromises: everyone is happier with the default than with any adjustment to it. (More here.) On the other hand, if you switch the default to some sort of neutral zero value (assuming that can be defined), then you will get compromises, but many bargainers would rather just be given their own slice of the total budget to allocate.
I think the importance of defaults comes through just by playing around with some numbers. Consider the difference between setting the default to be the status quo trajectory we’re currently on and setting it to be the worst possible outcome. Suppose we have two worldviews: one cares linearly about suffering in all other people, and the other is very locally focused and doesn’t care about immense suffering elsewhere. Relative to the status quo, option A might give value (worldview 1: 2, worldview 2: 10) and option B might give (4, 6). Against this default, option B has the higher Nash product (24 vs 20) and is preferred by Nash bargaining. However, relative to the worst-possible-outcome default, option A might give (10002, 12) and option B (10004, 8), in which case option A would be preferred to option B (~120k vs ~80k).
Nice! I’ll have to read this.
I agree defaults are a problem, especially with large choice problems involving many people. I honestly haven’t given this much thought, and assume we’ll just have to sacrifice someone or some desideratum to get tractability, and that will kind of suck but such is life.
I’m more wedded to Nash’s preference prioritarianism than to the specific setup, but I do see that once you get rid of Pareto efficiency relative to the disagreement point, it’s not going to be individually rational for everyone to participate. Which is sad.
What do you mean by ‘default’? You just have a utility for each option, and the best option is the one that maximizes net utility.
https://www.rangevoting.org/BayRegDum
In the traditional Nash bargaining setup, you evaluate people’s utilities in options relative to the default scenario (the disagreement point) and only consider options that make everyone at least as well off. This makes it individually rational for everyone to participate, because no one is made worse off by the bargain. That’s different from, say, range voting.