You say the exact reach of minimal morality is fuzzy/under-defined. How much is entailed by “don’t be a jerk”? This seems important. For instance, you might see ‘drowning child’ framings as (compelling) efforts to move charitable giving within the purview of “you’re a jerk if you don’t do this when you comfortably could.” Especially given the size of the stakes, could you imagine certain longtermist causes like “protecting future generations” similarly being framed as a component of minimal morality?
Yes, I do see the drowning child thought experiment as an example where minimal morality applies!
Regarding “protecting future generations” as a component of minimal morality:
My framework could maybe be adapted to incorporate this, but I suspect it would be difficult to make a coherent version of the framework where the reasons we’d (fully/always) count newly created future generations (and “cooperating through time” framings) don’t somehow re-introduce the assumption “something has intrinsic value.” I’d say the most central, most unalterable building blocks of my framework are “don’t use ‘intrinsic value’ (or related concepts) in your framing of the option space” and “think about ethics (at least partly) from the perspective of interests/goals.” So, to argue that minimal morality includes protecting our ability to bring future generations into existence (and actually doing so) regardless of present generations’ concerns about this, you’d have to explain why it’s indefensible/being a jerk to prioritize existing people over people who could exist. The relevant arguments I brought up against this are in this section, with endnote 21 containing my main argument. I’ll quote them here:
Arguably, [minimal morality] also contains a procreation asymmetry for the more substantial reason that creating a specific person singles them out from the sea of all possible people/beings in a way that “not creating them” does not.[21]
And here is the endnote:
If I fail to create a happy life, I’m acting suboptimally towards the subset of possible people who’d wish to be in that spot – but I’m not necessarily doing anything to highlight that particular subset. (Other possible people wouldn’t mind non-existence, and yet others would want to be created, but only under more specific conditions/circumstances.) By contrast, when I create a person who wishes they had never been born, I single out that particular person in the most real sense. If I could foresee that they would be unhappy, the excuse “Some other possible minds wouldn’t be unhappy in your shoes” isn’t defensible.
A key ingredient to my argument is that there’s no “universal psychology” that makes all possible people have the same interests/goals or the same way of thinking about existence vs. non-existence. Therefore, we can’t say “being born into a happy life is good for anyone.” At best, we could say “being born into a happy life is good for the sort of person who would find themselves grateful for it and would start to argue for totalist population ethics once they’re alive.” This raises the question: What about happy people who develop a different view on population ethics?
I develop this theme in a bunch of places throughout the article, for instance where I comment on the specific ways interests/goals-based ethics seem under-defined:
(1) Someone has under-defined interests/goals.
(2) It’s under-defined how many people/beings with interests/goals there will be.
(3) It’s under-defined which interests/goals a new person will have.
Point (3) in particular is sometimes under-appreciated. Without an objective axiology, I don’t think we can generalize about what’s good for newly created people/beings – there’s always the question “Which ones??”
Accordingly, there (IMO) seems to be an asymmetry here related to how creating a particular person singles out that particular person’s psychology in a way that not creating anyone does not. When you create a particular person, you’d better make sure that this particular person doesn’t object to what you did.
(You could argue that we just have to create happy people who will be grateful for their existence – but that would still feel a bit arbitrary in the sense that you’re singling out a particular type of psychology (why focus on people with the potential for gratefulness to exist?), and it would imply things like “creating a happy Buddhist monk has no moral value, but creating a happy, life-hungry entrepreneur or explorer has great moral value.” In the other direction, you could challenge the basis for my asymmetry by arguing that only looking at a new mind’s self-assessment of their existence is too weak to prevent bad things. You could ask “What if we created a mind that doesn’t mind being in misery? Would it be permissible to engineer slaves who don’t mind working hard under miserable conditions?” In reply to that, I’d point out that even if the mind ranks death after being born as worse than anything else, that doesn’t make it okay to bring such a conflicted being into existence. The particular mind in question wouldn’t object to what you did, but nowhere in your decision to create that particular mind did you show any concern for newly created people/beings – otherwise you’d have created minds that don’t let you exploit them maximally and don’t have the type of psychology that puts them into internally conflicted states like “ARRRGHH PAIN ARRRGHH PAIN ARRRGHH PAIN, but I have to keep existing, have to keep going!!!” You’d only ever create that particular type of mind if you wanted to get away with not having to care about the mind’s well-being, and that isn’t a defensible motive under minimal morality.)
At this point, I want to emphasize that the main appeal of minimal morality is that it’s really uncontroversial. Whether potential people count the same as existing and sure-to-exist people is quite a controversial issue. My framework doesn’t say “possible people don’t count.” It only says that it’s wrong to think everyone has to care about potential happy future people.
All that said, the fact that EAs exist who stake most or even all of their caring budget on helping future generations come into a flourishing existence is an indirect reason why minimal morality may include caring for future generations! So, minimal morality takes this detour – which you might find a counterintuitive reason to care about the future, but nonetheless – where one reason people should care about future generations (in low-demanding ways) is that many other existing people care strongly and sincerely about there being future generations.
Am I right in thinking that, in order to creatively duck things like the RP, the pinprick argument, arguments against the asymmetry (etc.), you are rejecting that there is a meaningful “better than” relation between certain states of affairs in population ethics contexts? If so, this seems somewhat implausible, because there do seem to be some cases where one state of affairs is better than another, and views which say “sure, some comparisons are clear, but others are vague or subjective” seem complicated. Do you just need to opt out of the entire game of “some states of affairs are better than other states of affairs (discontinuous with our own world)”? Curious how you frame this in your own mind.
This description doesn’t quite resonate with me. Maybe it’s close, though. I would rephrase it as something like “We can often say that one outcome is meaningfully better than another outcome on some list of evaluation criteria, but there’s no objective list of evaluation criteria that everyone ought to follow.” (But that’s just a rephrasing of the central point “there’s no objective axiology.”)
I want to emphasize that I agree that, e.g., there’s an important sense in which creating a merely medium-happy person is worse than creating a very happy person. My framework allows for this type of better-than relation even though my framework also says “person-affecting views are a defensible systematization of ambitious morality.” How are these two takes compatible? Here, I point out that person-affecting views aren’t about what’s best for newly created people/beings. Person-affecting views, the way I motivate them in my framework, would be about doing what’s best for existing and sure-to-exist people/beings. Sometimes, existing and sure-to-exist people/beings want to bring new people/beings into existence. In those cases, we need some guidance about things like whether it’s permissible to bring unhappy people into existence. My solution: minimal morality already has a few things to say about how not to create new people/beings. So, person-affecting views are essentially about maximally benefitting the interests/goals of existing people/beings without overstepping minimal morality when it comes to new people/beings.
The above explains how my view “creatively ducks” arguments against the asymmetry.
I wouldn’t necessarily say that my view ducks the repugnant conclusion – at least not in all instances! Mostly, my view avoids versions of the repugnant conclusion where the initial paradise-like population is already actual. This also means it blocks the very repugnant conclusion. By contrast, when it comes to a choice between colonizing a new galaxy with either a small paradise-like population or a very large population with people with lower but still positive life quality, my framework actually says “both of these are compatible with minimal morality.”
(Personally, I’ve always found the repugnant conclusion a lot less problematic when it plays out with entirely new people in a far-away galaxy.)
I found the “court hearing analogy” and the overall discussion of population ethics in terms of the anticipated complaints/appeals/preferences of future people a bit confusing (because, as you point out, it’s not clear how it makes sense in light of the non-identity problem). In particular, your tentative solution of talking about the interests of ‘interest groups’ seems like it’s kind of veering into the axiological territory that you wanted to avoid, no? As in: groups don’t literally have desires or preferences or goals or interests above and beyond the individuals that make them up. But we can’t compare across individuals here, so it’s not clear how we can meaningfully compare the interests of groups in this sense. So what are we comparing? Well, groups can be said to have different kinds of intrinsic value, and while that value could be manifested/realised/determined only by individuals, you can comfortably compare value across groups with different sets of individuals.
I don’t think my use of “interest groups” smuggles in the need for an objective axiology, but I agree that I should flesh out better how I think about this. (The way it’s currently in the text, it’s hard to interpret how I’m thinking of it.)
Let me create a sketch based on the simplest formulation of the non-identity problem (have child A now or delay pregnancy a few days to have a healthier, better-off child B). The challenge is to explain why minimal morality would want us to wait. If we think of objections by newly created people/beings as tied to their personality after birth, then A cannot complain that they weren’t given full consideration of their interests. By contrast, if A and B envision themselves as an “interest group” behind some veil of ignorance about who is who after birth, they would agree that they prefer a procedure where it matters that the better-off person is created.
Does that approach introduce axiology back in? In my view, not necessarily. But I can see that there’s a bit of a tension here (e.g., imagining not-yet-born people trying to agree with each other behind a veil of ignorance on demands to make from their potential creators could lead to them agreeing on some kind of axiology? But then again, it depends on their respective psychologies, on “who are the people we’re envisioning creating?” The way I think about it, the potential new people would only reach agreement on demands in cases where one action is worse than another action on basically all defensible population-ethical positions. Accordingly, it’s only those no-go actions that minimal morality prohibits.) Therefore, I flagged the non-identity problem as an area of further exploration regarding its implications for my view.
I understand the difference in emphasis between saying that the moral significance of people’s well-being is derivative of its contribution to valuable states of affairs, as contrasted with saying that what makes states of affairs valuable just is people’s well-being (or something to that effect). But I’m curious what this means in a decision-relevant sense?
As I make clear in my replies to Richard Ngo, I’m not against the idea of having an axiology per se. I just claim that there’s no objective axiology. I understand the appeal of having a subjective axiology, so I would perhaps even count myself as a “fan of axiology.” (For these reasons, I don’t necessarily disagree with the two examples you gave / your two points about axiology – though I’m not sure I understood the second bullet point. I’m not familiar with that particular concept by Bernard Williams.)
I can explain again the role of subjectivist axiologies in my framework:
Minimal morality doesn’t have an axiology – interests/goals matter for minimal morality, but minimal morality doesn’t tell us exactly what to do under all circumstances. It’s more like a set of constraint-like principles on what not to do (or things you have to take into account if you want to find a permissible way of doing something). However, minimal morality isn’t satisfying for people with high altruistic motivation. This is where the need for a subjective axiology comes in. And I’m saying that there are several different, equally defensible options for subjective axiologies. What this means in practice is mostly just “To not be jerks, people with a subjective axiology like hedonism or tranquilism shouldn’t act like classical utilitarians or hedonist utilitarians in all circumstances. Instead, they should only follow their preferred morality in cases where minimal morality is silent – i.e., where minimal morality permits the implications of these single-minded moral frameworks.”
As I say in the text:
Where people have well-specified interests/goals, it would be a preposterous conception of [ambitious (care-)morality] to stick someone into an experience machine against their will or kill them against their will to protect them from future suffering.
(My comment replies to Richard Ngo cover some more points along the same theme.)