Am I right in thinking that in order to creatively duck things like the RP, pinprick argument, arguments against asymmetry (etc) you are rejecting that there is a meaningful “better than” relation between certain states of affairs in population ethics contexts? If so this seems somewhat implausible because there do seem to be some cases where one state of affairs is better than another, and views which say “sure, some comparisons are clear, but others are vague or subjective” seem complicated. Do you just need to opt out of the entire game of “some states of affairs are better than other states of affairs (discontinuous with our own world)”? Curious how you frame this in your own mind.
This description doesn’t quite resonate with me. Maybe it’s close, though. I would rephrase it as something like “We can often say that one outcome is meaningfully better than another outcome on some list of evaluation criteria, but there’s no objective list of evaluation criteria that everyone ought to follow.” (But that’s just a rephrasing of the central point “there’s no objective axiology.”)
I want to emphasize that I agree that, e.g., there’s an important sense in which creating a merely medium-happy person is worse than creating a very happy person. My framework allows for this type of better-than relation even though my framework also says “person-affecting views are a defensible systematization of ambitious morality.” How are these two takes compatible? Here, I point out that person-affecting views aren’t about what’s best for newly created people/beings. Person-affecting views, the way I motivate them in my framework, would be about doing what’s best for existing and sure-to-exist people/beings. Sometimes, existing and sure-to-exist people/beings want to bring new people/beings into existence. In those cases, we need some guidance about things like whether it’s permissible to bring unhappy people into existence. My solution: minimal morality already has a few things to say about how not to create new people/beings. So, person-affecting views are essentially about maximally benefitting the interests/goals of existing people/beings without overstepping minimal morality when it comes to new people/beings.
The above explains how my view “creatively ducks” arguments against the asymmetry.
I wouldn’t necessarily say that my view ducks the repugnant conclusion – at least not in all instances! Mostly, my view avoids versions of the repugnant conclusion where the initial paradise-like population is already actual. This also means it blocks the very repugnant conclusion. By contrast, when it comes to a choice between colonizing a new galaxy with either a small paradise-like population or a very large population with people with lower but still positive life quality, my framework actually says “both of these are compatible with minimal morality.”
(Personally, I’ve always found the repugnant conclusion a lot less problematic when it plays out with entirely new people in a far-away galaxy.)
I found the “court hearing analogy” and the overall discussion of population ethics in terms of the anticipated complaints/appeals/preferences of future people a bit confusing (because, as you point out, it’s not clear how it makes sense in light of the non-identity problem). In particular, your tentative solution of talking about the interests of ‘interest groups’ seems like it’s kind of veering into the axiological territory that you wanted to avoid, no? As in: groups don’t literally have desires or preferences or goals or interests above and beyond the individuals that make them up. But we can’t compare across individuals here, so it’s not clear how we can meaningfully compare the interests of groups in this sense. So what are we comparing? Well, groups can be said to have different kinds of intrinsic value, and while that value could be manifested/realised/determined only by individuals, you can comfortably compare value across groups with different sets of individuals.
I don’t think my use of “interest groups” smuggles in the need for an objective axiology, but I agree that I should flesh out better how I think about this. (The way it’s currently in the text, it’s hard to interpret how I’m thinking of it.)
Let me create a sketch based on the simplest formulation of the non-identity problem (have child A now or delay pregnancy a few days to have a healthier, better-off child B). The challenge is to explain why minimal morality would want us to wait. If we think of objections by newly created people/beings as tied to their personality after birth, then A cannot complain that they weren’t given full consideration of their interests. By contrast, if A and B envision themselves as an “interest group” behind some veil of ignorance of who is who after birth, they would agree that they prefer a procedure where it matters that the better-off person is created.
Does that approach introduce axiology back in? In my view, not necessarily. But I can see that there’s a bit of a tension here (e.g., imagining not-yet-born people trying to agree with each other behind a veil of ignorance on demands to make from their potential creators could lead to them agreeing on some kind of axiology? But then again, it depends on their respective psychologies, on “who are the people we’re envisioning creating.” The way I think about it, the potential new people would only reach agreement on demands in cases where one action is worse than another action on basically all defensible population-ethical positions. Accordingly, it’s only those no-go actions that minimal morality prohibits.) Therefore, I flagged the non-identity problem as an area of further exploration regarding its implications for my view.
I understand the difference in emphasis between saying that the moral significance of people’s well-being is derivative of its contribution to valuable states of affairs, as contrasted with saying that what makes states of affairs valuable just is people’s well-being (or something to that effect). But I’m curious what this means in a decision-relevant sense?
As I make clear in my replies to Richard Ngo, I’m not against the idea of having an axiology per se. I just claim that there’s no objective axiology. I understand the appeal of having a subjective axiology so I would perhaps even count myself as a “fan of axiology.” (For these reasons, I don’t necessarily disagree with the two examples you gave / your two points about axiology – though I’m not sure I understood the second bullet point. I’m not familiar with that particular concept by Bernard Williams.)
I can explain again the role of subjectivist axiologies in my framework:
Minimal morality doesn’t have an axiology – interests/goals matter for minimal morality, but minimal morality doesn’t tell us exactly what to do under all circumstances. It’s more like a set of constraint principles on what not to do (or things you have to take into account if you want to find a permissible way of doing something). However, minimal morality isn’t satisfying for people with high altruistic motivation. This is where the need for a subjective axiology comes in. And I’m saying that there are several different equally defensible options for subjective axiologies. What this means in practice is mostly just “To not be jerks, people with a subjective axiology like hedonism or tranquilism shouldn’t act like classical utilitarians or hedonist utilitarians in all circumstances. Instead, they should only follow their preferred morality in cases where minimal morality is silent – i.e., where minimal morality permits the implications of these single-minded moral frameworks.”
As I say in the text:
Where people have well-specified interests/goals, it would be a preposterous conception of [ambitious (care-)morality] to stick someone into an experience machine against their will or kill them against their will to protect them from future suffering.
(My comment replies to Richard Ngo cover some more points along the same theme.)