This is a really wonderful post, Joe. When I receive notifications for your posts, I feel like I’m put in touch with the excitement that people in the 1800s might have felt when being delivered newspapers containing serially published chapters of famous novels. : )
Okay, enough buttering up. Onto objections.
I very much like your notions of taking responsibility, and of seeing yourself whole. However, I object to certain ways you take yourself to be applying these criteria.
(I’ll respond in two comments; the points are related, but I wanted to make it easier to respond to each point independently)
1. Understanding who we are, and who it’s possible to be
My first point of pushback: I think that your suggested way of engaging with population axiology can, in many cases, impede one’s ability to take full responsibility for one’s values, through improperly narrowing the space of who it’s possible to be.
When I ask myself why I care about understanding what it’s possible to be, it’s because I care about who I can be — what sort of thing, with what principles, will the world allow me to be?
In your discussion of Utopia and Lizards, you could straightforwardly bring out a contradiction in the views of your interlocutor, because you engineered a direct comparison between concrete worlds, in a way that was analogous to the repugnant conclusion.
Moreover, your interlocutor endorsed certain principles that were collectively inconsistent. You need to have your interlocutor endorse principles, because you don’t get inconsistency results from mere behavior.
People can just decide between concrete worlds however they like. You can only show that someone is inconsistent if they take themselves to be acting on the basis of incompatible principles.
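For concreteness, here’s one way the inconsistency can be rendered once the principles are on the table (a sketch only; the labels A, A+, and Z follow Parfit’s Mere Addition Paradox, and ‘≻’ is read as ‘all-things-considered better than’):

```latex
\begin{align*}
&\text{(P1) Mere addition: } A^{+} \succeq A
  && \text{(adding lives worth living doesn't make things worse)} \\
&\text{(P2) Quantity over quality: } Z \succ A^{+}
  && \text{(enough barely-good lives outweigh fewer great ones)} \\
&\text{(P3) Transitivity: } Z \succ A^{+} \succeq A \implies Z \succ A \\
&\text{(P4) Anti-repugnance: } A \succ Z
\end{align*}
```

P1–P3 jointly entail $Z \succ A$, contradicting P4 — but note that the derivation only goes through if the person actually endorses each principle as governing their choices, which is the point at issue above.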
I agree that doing ethics (broadly construed) can help the anti-realist understand which sets of principles it even makes sense to endorse as a whole. But I also believe that certain kinds of formal theorizing can inhibit our sense of what (or who) it’s possible to be, because such theorizing can (incorrectly) lead us to believe that we are operating within a space which captures the only possible way to model our moral commitments.
For instance: I don’t think that I’m committed to a well-defined, impartial, and context-independent aggregate welfare ranking with the property of finite fine-grainedness. The axioms of Arrhenius’s impossibility theorem (to which you allude) quantify over welfare levels with well-defined values.
If I reflect on my principles, I don’t find this aggregate welfare measure directly, nor do I see that it’s entailed by any of my other commitments. If I decide on one concrete world over another, I don’t take this to be grounded in a claim about aggregate welfare.
I don’t mean to say that I think there are no unambiguous cases where societies (worlds) are happier than others. Rather, I mean to say that granting some determinate welfare rankings over worlds doesn’t mean that I’m thereby committed to the existence of a well-defined, impartial welfare ranking over worlds in every context.
So: I think I have principles which endorse the claim: ‘Utopia > Lizards’, and I don’t think that leaves me endorsing some unfortunate preference about concrete states of affairs. In Utopia and Lizards, Z (to me) seems obviously worse than A+. In the original Mere Addition Paradox, it’s a bit trickier, because Parfit’s original presentation assumes the existence of ‘an’ aggregate welfare-level, which is meant to represent some (set of) concrete state of affairs. And I think more would need to be said in order to convince me that there’s some fact of the matter about which concrete situations instantiate Parfit’s puzzle.
How does this all relate to your initial defense of moral theorizing? In short, I think that moral theorizing can have benefits (which you suggest), but — from my current perspective — I feel as though moral theorizing can also impose an overly narrow picture of what a consistent moral self-conception must look like.