I found the “court hearing analogy” and the overall discussion of population ethics in terms of the anticipated complaints/appeals/preferences of future people a bit confusing (because, as you point out, it’s not clear how it makes sense in light of the non-identity problem). In particular, your tentative solution of talking about the interests of ‘interest groups’ seems to veer into the axiological territory you wanted to avoid, no? That is: groups don’t literally have desires or preferences or goals or interests above and beyond the individuals that make them up. But we can’t compare across individuals here, so it’s not clear how we can meaningfully compare the interests of groups in this sense. So what are we comparing? Well, groups can be said to have different kinds of intrinsic value, and while that value could be manifested/realised/determined only by individuals, you can comfortably compare value across groups with different sets of individuals.
I don’t think my use of “interest groups” smuggles in the need for an objective axiology, but I agree that I should flesh out better how I think about this. (The way it’s currently in the text, it’s hard to interpret how I’m thinking of it.)
Let me sketch this using the simplest formulation of the non-identity problem (have child A now, or delay pregnancy a few days to have a healthier, better-off child B). The challenge is to explain why minimal morality would want us to wait. If we think of objections by newly created people/beings as tied to their personality after birth, then A cannot complain that their interests weren’t given full consideration. By contrast, if A and B envision themselves as an “interest group” behind a veil of ignorance about who is who after birth, they would agree that they prefer a procedure under which it matters that the better-off person is created.
Does that approach introduce axiology back in? In my view, not necessarily. But I can see that there’s a bit of a tension here. (E.g., imagining not-yet-born people trying to agree with each other behind a veil of ignorance on demands to make of their potential creators could lead to them agreeing on some kind of axiology? But then again, it depends on their respective psychologies, on “who are the people we’re envisioning creating?” The way I think about it, the potential new people would only reach agreement on demands in cases where one action is worse than another on basically all defensible population-ethical positions. Accordingly, it’s only those no-go actions that minimal morality prohibits.) That’s why I flagged the non-identity problem as an area of further exploration regarding its implications for my view.
I understand the difference in emphasis between saying that the moral significance of people’s well-being is derivative of its contribution to valuable states of affairs, as contrasted with saying that what makes states of affairs valuable just is people’s well-being (or something to that effect). But I’m curious what this means in a decision-relevant sense.
As I make clear in my replies to Richard Ngo, I’m not against the idea of having an axiology per se. I just claim that there’s no objective axiology. I understand the appeal of having a subjective axiology so I would perhaps even count myself as a “fan of axiology.” (For these reasons, I don’t necessarily disagree with the two examples you gave / your two points about axiology – though I’m not sure I understood the second bullet point. I’m not familiar with that particular concept by Bernard Williams.)
I can explain again the role of subjectivist axiologies in my framework:
Minimal morality doesn’t have an axiology – interests/goals matter for minimal morality, but minimal morality doesn’t tell us exactly what to do under all circumstances. It’s more like a set of constraining principles on what not to do (or things you have to take into account if you want to find a permissible way of doing something). However, minimal morality isn’t satisfying for people with high altruistic motivation. This is where the need for a subjective axiology comes in. And I’m saying that there are several different, equally defensible options for subjective axiologies. What this means in practice is mostly just: “To not be jerks, people with a subjective axiology like hedonism or tranquilism shouldn’t act like classical utilitarians or hedonist utilitarians in all circumstances. Instead, they should only follow their preferred morality in cases where minimal morality is silent – i.e., where minimal morality permits the implications of these single-minded moral frameworks.”
As I say in the text:
Where people have well-specified interests/goals, it would be a preposterous conception of [ambitious (care-)morality] to stick someone into an experience machine against their will or kill them against their will to protect them from future suffering.
(My comment replies to Richard Ngo cover some more points along the same theme.)