Person-affecting views aren't necessarily intransitive; they might instead give up the independence of irrelevant alternatives, so that A≥B among one set of options, but A<B among another set of options. I think this is actually an intuitive way to explain the repugnant conclusion:
If your available options are S, then the rankings among them are as follows:
S={A, A+, B}: A>B, B>A+, A>A+
S={A, A+}: A+≥A
S={A, B}: A>B
S={A+, B}: B>A+
A person-affecting view would need to explain why A>A+ when all three options are available, but A+≥A when only A+ and A are available.
However, violating IIA like this is also vulnerable to a Dutch book/money pump.
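As a toy illustration of the money-pump worry (the fee amounts and trade sequence are my own, not from the thread), an agent whose pairwise choices cycle through A+ ≥ A, B > A+, A > B can be walked around the cycle for a fee at each step:

```python
# Money-pump sketch: the agent at least weakly prefers each offered
# world to its current one, accepts every trade, pays a small fee
# each time, and ends up exactly where it started.

prefers = {("A+", "A"): True, ("B", "A+"): True, ("A", "B"): True}

def accepts(current, offered):
    # Accept iff the offered world is (at least weakly) preferred.
    return prefers.get((offered, current), False)

holdings, fees_paid = "A", 0
for offer in ["A+", "B", "A"]:
    if accepts(holdings, offer):
        holdings = offer
        fees_paid += 1  # one fee per accepted trade

# The agent returns to A having paid three fees for nothing.
```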
I think this makes more sense than it initially appears.
If A+ is the current world and B is possible, then the well-off people in A+ have an obligation to move to B (because B>A+).
If A is the current world, A+ is possible but B impossible, then the people in A incur no new obligations by moving to A+, hence indifference.
If A is the current world and both A+ and B are possible, then moving to A+ saddles the original people with an obligation to further move the world to B. But the people in A, by supposition, don't derive any benefit from the move to A+, and the obligation to move to B harms them. On the other hand, the new people in A+ don't matter because they don't exist in A. Thus A+>A in this case.
Basically: options create obligations, and when we're assessing the goodness of a world we need to take into account welfare + obligations (somehow).
I'm really showing my lack of technical savvy today, but I don't really know how to embed images, so I'll have to sort of awkwardly describe this.
For the classic version of the mere addition paradox this seems like an open possibility for a person-affecting view, but I think you can force pretty much any person-affecting view into intransitivity if you use the version in which every step looks like some version of A+. In other words, you start with something like A+; in the next world, you have one bar that looks like B, plus another, lower but equally wide bar; in the step after that, you equalize to higher than the average of those in a B-like manner, and another equally wide, lower bar appears; and so on. This seems to demand that basically any person-affecting view prefer each step to the one before it, but also prefer the world two steps back to the current one.
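The iterated construction can be sketched numerically (the welfare levels, widths, and "boost" are illustrative choices of mine, representing each bar as a (level, width) pair):

```python
# Each step merges everyone from the previous world into one bar
# above that world's average, then appends a new, lower but equally
# wide bar -- so the carried-over people are better off on average
# each step, while overall average welfare keeps falling.

def world_avg(world):
    total = sum(level * width for level, width in world)
    return total / sum(width for _, width in world)

def next_world(world, boost=0.5, new_level=1.0):
    width = sum(w for _, w in world)
    return [(world_avg(world) + boost, width), (new_level, width)]

worlds = [[(4.0, 1.0), (1.0, 1.0)]]  # an A+-like starting world
for _ in range(3):
    worlds.append(next_world(worlds[-1]))
```

Here each world beats its predecessor on person-affecting grounds (the common people sit above the old average), yet the overall average declines every step, which is the pressure toward a cycle.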
Views can be transitive within each option set, but have previous pairwise rankings change as the option set changes, e.g. as new options become available. I think you're just calling this intransitivity, but it's not technically intransitivity by definition; it's instead a violation of the independence of irrelevant alternatives.
Transitivity + violating IIA seems more plausible to me than intransitivity, since the former is more action-guiding.
I agree that there's a difference, but I don't see how that contradicts the counterexample I just gave. Imagine a person-affecting view that is presented with every possible combination of people/welfare levels as options. I am suggesting that, even if it is sensitive to irrelevant alternatives, it will have strong principled reasons to favor some of the options in this set cyclically, if not doing so means ranking a world lower even though it is better on average for the pool of people the two worlds have in common. Or maybe I'm misunderstanding what you're saying?
There are person-affecting views that will rank X<Y or otherwise not choose X over Y even if the average welfare of the individuals common to both X and Y is higher in X.
A necessitarian view might just look at all the people common to all available options at once, maximize their average welfare, and then ignore contingent people (or use them to break ties, say). Many individuals common to two options X and Y could be ignored this way, because they aren't common to all available options, and so are still contingent.
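A minimal sketch of this necessitarian rule, with hypothetical worlds and welfare numbers of my own choosing (worlds map person to welfare; tie-breaking by contingent people is omitted):

```python
# Necessitarian choice: maximize average welfare of the people who
# exist in every available option; ignore everyone else.

def necessitarian_choice(options):
    necessary = set.intersection(*(set(w) for w in options))
    if not necessary:
        return None  # the view is silent: no one is common to all options

    def avg(world):
        return sum(world[p] for p in necessary) / len(necessary)

    return max(options, key=avg)

A = {"alice": 4, "bob": 4}
Aplus = {"alice": 4, "bob": 4, "carol": 1}
B = {"alice": 3, "bob": 3, "carol": 3}

best = necessitarian_choice([A, Aplus, B])
# carol is contingent (absent from A), so only alice and bob count;
# A and A+ tie at average 4, beating B's average 3.
```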
Christopher J. G. Meacham, 2012 (EA Forum discussion here) describes another transitive person-affecting view, where I think something like "the available alternatives are so relevant, that they can even overwhelm one world being better on average than another for every person the two have in common", which you mentioned in your reply, is basically true. For each option, and each individual in that option, we take the difference between their maximum welfare across options and their welfare in that option, add these differences up, and then minimize the sum. Crucially, it's assumed that when someone doesn't exist in an option, we don't add their welfare loss from their maximum for that option, and when someone has negative welfare in an option but doesn't exist in another option, their maximum welfare across options will be at least 0. There are some technical details for matching individuals with different identities across worlds when there are people who aren't common to all options. So, in the repugnant conclusion, introducing B makes A>A+, because it raises the maximum welfares of the extra people in A+.
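My reading of that harm-minimization rule can be sketched like this (identities are assumed fixed across worlds, so the cross-world matching details are ignored, and the worlds and welfare numbers are illustrative):

```python
# Harm-minimization sketch: an option's "harm" sums, over everyone
# who exists in it, their shortfall from their best welfare across
# the options where they exist (floored at 0 if they fail to exist
# somewhere). Choose the option with the least total harm.

def total_harm(option, options):
    harm = 0.0
    for person, welfare in option.items():
        peak = max(w[person] for w in options if person in w)
        if any(person not in w for w in options):
            peak = max(peak, 0)  # nonexistence caps the loss at 0
        harm += peak - welfare
    return harm

def meacham_choice(options):
    return min(options, key=lambda w: total_harm(w, options))

A = {"alice": 4, "bob": 4}
Aplus = {"alice": 4, "bob": 4, "carol": 1}
B = {"alice": 3, "bob": 3, "carol": 3}

# With B available, carol's peak rises to 3, so A+ accrues harm 2
# (via carol) while A accrues none: introducing B makes A beat A+.
```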
Some views may start from pairwise comparisons that would give the kinds of cycles you described, but then apply a voting method like beatpath voting to rerank or select options and avoid cycles within option sets. This is done in Teruji Thomas, 2019. I personally find this sort of approach most promising.
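A rough sketch of beatpath (Schulze-style) selection over cyclic pairwise rankings (the margins here are made up, and Thomas's actual construction differs in its details):

```python
# Beatpath sketch: even when pairwise comparisons are cyclic, the
# strongest-path relation between options is acyclic, so a winner
# can be selected within each option set.

def beatpath_winners(candidates, margin):
    """margin[(x, y)] > 0 means x beats y pairwise by that much."""
    # p[x, y]: strength of the strongest beatpath from x to y.
    p = {(x, y): max(margin.get((x, y), 0), 0)
         for x in candidates for y in candidates if x != y}
    # Floyd-Warshall-style widest-path computation.
    for k in candidates:
        for x in candidates:
            for y in candidates:
                if len({x, y, k}) == 3:
                    p[x, y] = max(p[x, y], min(p[x, k], p[k, y]))
    return [x for x in candidates
            if all(p[x, y] >= p[y, x] for y in candidates if y != x)]

# A cycle like the one above: A beats B, B beats A+, A+ beats A.
margins = {("A", "B"): 2, ("B", "A+"): 3, ("A+", "A"): 1}
winners = beatpath_winners(["A", "A+", "B"], margins)
# A's weakest defeat on its paths is the strongest, so A wins
# despite the pairwise cycle.
```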
This is interesting. I'm especially interested in the idea of applying voting methods to ranking dilemmas like this, which I'm noticing is getting more common. On the other hand, it sounds to me like person-affecting views mostly solve transitivity problems by functionally becoming less person-affecting in a strong, principled sense, except in toy cases. From your description, Meacham sounds like it converges to averagism on steroids as you test it against a larger and more open range of possibilities (worse-off people lose a world points, but so do more people, since it sums the differences up). If you modify it to look at the average of these differences, then the theory seems to become vulnerable to the repugnant conclusion again, as the quantity of added people who are better off in one step of the argument than the last can wash out the larger per-individual difference for those who have existed since earlier steps. Meanwhile, the necessitarian view as you describe it seems to yield either no results in practice, if taken as described in a large set of worlds with no one common to every world, or, if reinterpreted to only include the people common to the most worlds, something like a utility monster situation in which a single person, or some small range of possible people, determines almost all of the value across all the different worlds. All of this does avoid intransitivity, though, as you say.
Or I guess maybe it could say that the available alternatives are so relevant that they can even overwhelm one world being better on average than another for every person the two have in common?