I really like this piece, and I think I share in a lot of these views. Just on some fairly minor points:
Deep Incommensurability. It seems like incommensurability helps with avoiding MPL, but not actually that much. For example, there seem to be many moral theories (e.g. something somewhat like Person-Affecting Views) that are incommensurable (or indifferent) between worlds of different sizes, but not between worlds of different qualities. So they may really care whether it is a world of humans, insects, or hedonium.
I can imagine views (they do run into non-identity, but maybe there are ways of formulating them that don’t) for which this would be a real problem. For example, imagine a view that holds that simulated human existence is the best form of life, but is indifferent between that and non-existence. Such a view won’t care whether we leave the universe insentient, but faced with a pairwise choice between hedonium and simulated humans, it will take the simulated humans every time. So it doesn’t care much if we go extinct, but does care if the hedonistic utilitarians win. Indeed, these views may be even less willing to take trades than many views that care about quantity. I imagine many religions, particularly universalist religions like Christianity and Islam, may actually fall into this category.
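One way to see the structure of such a view is that its indifference fails to be transitive, which is exactly what incommensurability permits. Here is a toy formalisation (my own sketch, not from the essay, with hypothetical world labels) of the hypothetical view above:

```python
# Toy sketch of a view that is indifferent between simulated-human worlds
# and non-existence, indifferent between non-existence and hedonium, yet
# strictly prefers simulated humans to hedonium in a pairwise choice.

STRICT = {("sim_humans", "hedonium")}  # the only strict preference it holds

def prefer(a, b):
    """True iff world a is strictly preferred to world b."""
    return (a, b) in STRICT

def indifferent(a, b):
    """Neither direction strictly preferred: indifferent or incommensurable."""
    return not prefer(a, b) and not prefer(b, a)

assert indifferent("sim_humans", "nonexistence")  # doesn't mind extinction
assert indifferent("nonexistence", "hedonium")
assert prefer("sim_humans", "hedonium")           # but cares who "wins"
# Indifference here is not transitive -- a signature of incommensurability.
```

Because the view never has to trade quantity against quality on a common scale, it can sit out the extinction question entirely while still fighting hard over which successor wins.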
I think some more discussion of the ‘kinetics’ vs ‘equilibrium’ point you allude to would be pretty interesting. You could reasonably hold the view that rational beings (or sentient beings, or whatever other sort) converge to moral correctness in infinite time. But we are likely not waiting an infinite time before locking in decisions that cannot be reversed. Because irreversible moral decisions could occur at a faster rate than correct moral convergence (i.e. the kinetics of the process matter more than its equilibrium), we shouldn’t expect the equilibrium to dominate. I think you gesture towards this, but exploring the ordering further would be very interesting.
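The kinetics point can be made concrete with a very simple race model (entirely my own construction, with made-up rates): each period, an irreversible lock-in fires with some probability, and moral convergence completes with some probability, independently. Even if convergence is certain in the limit, what matters is which event happens first.

```python
# Toy race between irreversible lock-in and moral convergence, modelled as
# independent geometric waiting times over discrete periods.

def p_lockin_first(p_lockin, r_converge, horizon=100_000):
    """Probability a lock-in fires in some period before convergence has
    occurred (and convergence doesn't also complete that same period)."""
    survive = 1.0   # P(neither event has happened yet)
    total = 0.0
    for _ in range(horizon):
        total += survive * p_lockin * (1 - r_converge)
        survive *= (1 - p_lockin) * (1 - r_converge)
    return total

# Slow convergence relative to the lock-in rate: lock-in almost surely wins.
print(p_lockin_first(0.01, 0.001))   # ~0.91
# Reverse the rates and the equilibrium (convergence) dominates instead.
print(p_lockin_first(0.001, 0.01))   # ~0.09
```

So the interesting question is not whether convergence happens at equilibrium, but the ratio of the two rates along the way.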
I also wonder whether views that are pluralist rather than monist about value make the MPL problem worse or better. I could see arguments either way, depending on exactly how those views are formulated, but it would be interesting to explore.
Very interesting piece anyway, thanks a lot. It really resonates with a lot I’ve been thinking about.
I’m sure I’ll have a few more comments at some point as I revisit the essay.
Thomas (2019) calls these sorts of person-affecting views “wide”. I think “narrow” person-affecting views can be more liberal (due to incommensurability) about what kinds of beings are brought about.
And narrow asymmetric person-affecting views, as in Thomas (2019) and Pummer (2024), can still tell you to prevent “bad” lives or bads in lives, but, contrary to antinatalist views, “good” lives and goods in lives can still offset the bad. Pummer (2024) solves a special case of the Nonidentity problem this way, by looking at goods and bads in lives.
But these asymmetric views may be less liberal than strict/symmetric narrow person-affecting views, because they could be inclined to prevent kinds of lives of which many turn out bad, in favour of lives that are better on average. Or more liberal, depending on how you think of liberalism: if someone would have a horrible life to which they would object, it seems illiberal to force them to have it.
I think these papers have made some pretty important progress in further developing person-affecting views.[1]
I think they need to be better adapted to choices between more than two options, in order to avoid the Repugnant Conclusion and replacement (St. Jules, 2024). I’ve been working on this and have a tentative solution, but I’m struggling to find anyone interested in reading my draft.
Thanks a lot, @Gideon Futerman! Good additions, which all seem right to me.
Another question I’ve had on my mind is how much MPL is related to additive separability. At first pass you might think that moral atomism makes you more likely to buy MPL, since you have so many different spaces of value to optimise. But holistic views can in principle produce even sharper differences in the value of worlds: for example, a view on which you must align all of the stars forever, or you capture no value at all.
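The contrast can be put in a toy pair of value functions (my own illustration, with “stars” as stand-in value atoms): a separable view sums value across atoms, while a holistic threshold view assigns value only if every atom is aligned, producing a far sharper cliff between nearby worlds.

```python
# Separable vs holistic (all-or-nothing) value over a list of per-star
# alignment flags.

def separable_value(aligned):
    """Additively separable: each aligned star contributes one unit."""
    return sum(aligned)

def holistic_threshold_value(aligned):
    """Holistic threshold: full value only if every star is aligned."""
    return len(aligned) if all(aligned) else 0

world_a = [True] * 100           # everything aligned
world_b = [True] * 99 + [False]  # one star short

print(separable_value(world_a) - separable_value(world_b))                    # 1
print(holistic_threshold_value(world_a) - holistic_threshold_value(world_b))  # 100
```

On the separable view a near-miss world loses almost nothing; on the threshold view it loses everything, which is the kind of cliff that could drive MPL-style stakes even without atomism.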
I’d like to have a clearer view on when moral viewpoints will tend to end up at MPL, but I don’t have one yet.