Thanks for writing this — I’m curious about approaches like this, and your post felt unusually comprehensive. I also don’t yet feel like I could faithfully represent your view to someone else, possibly because I read this fairly quickly.
Some scattered thoughts / questions below, written in a rush. I expect some or many of them are fairly confused! NNTR.
On this framework, on what grounds can someone not “defensibly ignore” another’s complaint? Am I right in thinking this is because ignoring some complaints means frustrating others’ goals or preferences, and not frustrating others’ goals or preferences is indefensible, as long as we care about getting along/cooperating at all (minimal morality)?
You say "The exact reach of minimal morality is fuzzy/under-defined. How much is entailed by 'don't be a jerk'?" This seems important. For instance, you might see ‘drowning child’ framings as (compelling) efforts to move charitable giving within the purview of “you’re a jerk if you don’t do this when you comfortably could.” Especially given the size of the stakes, could you imagine certain longtermist causes like “protecting future generations” similarly being framed as a component of minimal morality?
One speculative way you could do this: you described ‘minimal morality’ as “contractualist” or “cooperation-focused” in spirit. Certainly some acts seem wrong because they just massively undermine the potential for many people living at the same time with many different goals to cooperate on whatever their goals are. But maybe there are some ways in which we collaborate/cooperate/make contracts across (large stretches of) time. Maybe this could ground obligations to future people in minimal morality terms.
I understand the difference in emphasis between saying that the moral significance of people’s well-being is derivative of its contribution to valuable states of affairs, as contrasted with saying that what makes states of affairs valuable just is people’s well-being (or something to that effect). But I’m curious what this means in a decision-relevant sense?
Here’s an analogy: my daily walk isn’t important because it increases the counter on my pedometer; rather the counter matters because it says something about how much I’ve walked (and walking is the thing I really care about). To see this, consider that intervening on the counter without actually walking does not matter at all.
But unlike this analogy, fans of axiology might say that “the value of a state of affairs” is not a measure of what matters (actual people and their well-being) that can be manipulated independently of those things; rather it is defined in terms of what you say actually matters, so there is no substantial disagreement beyond one of emphasis (this is why I don’t think I’m on board with ‘further thought’ complaints against aggregative consequentialism). Curious what I’m missing here, though I realise this is maybe also a distraction.
I found the “court hearing analogy” and the overall discussion of population ethics in terms of the anticipated complaints/appeals/preferences of future people a bit confusing (because, as you point out, it’s not clear how it makes sense in light of the non-identity problem). In particular your tentative solution of talking about the interests of ‘interest groups’ seems like it’s kind of veering into the axiological territory that you wanted to avoid, no? As in: groups don’t literally have desires or preferences or goals or interests above and beyond the individuals that make them up. But we can’t compare across individuals here, so it’s not clear how we can meaningfully compare the interests of groups in this sense. So what are we comparing? Well, groups can be said to have different kinds of intrinsic value, and while that value could be manifested/realised/determined only by individuals, you can comfortably compare value across groups with different sets of individuals.
Am I right in thinking that in order to creatively duck things like the RP, pinprick argument, arguments against asymmetry (etc) you are rejecting that there is a meaningful “better than” relation between certain states of affairs in population ethics contexts? If so this seems somewhat implausible because there do seem to be some cases where one state of affairs is better than another, and views which say “sure, some comparisons are clear, but others are vague or subjective” seem complicated. Do you just need to opt out of the entire game of “some states of affairs are better than other states of affairs (discontinuous with our own world)”? Curious how you frame this in your own mind.
I had an overall sense that you are both explaining the broad themes of an alternative to population ethics grounded in axiology; and then building your own richer view on top of that (with the court hearing analogy, distinction between minimal and ambitious morality, etc), such that your own view is like a plausible instance of this broad family of alternatives, but doesn’t obviously follow from the original motivation for an alternative? Is that roughly right?
I also had a sense that you could have written a similar post just focused on simpler kinds of aggregative consequentialism (maybe you have in other posts, afraid I haven’t read them all); in some sense you picked an especially ambitious challenge in (i) developing a perspective on ethics that can be applied broadly; and then (ii) applying it to an especially complex part of ethics. So double props I guess!
Thanks for these questions! Your descriptions capture what I meant in most bullet points, but there are some areas where I think I failed to communicate some features of my position.
I’ll reply to your points in a different order than you made them (because that makes a few things easier). I’ll also make several comments in a thread rather than replying to everything at once.
I had an overall sense that you are both explaining the broad themes of an alternative to population ethics grounded in axiology; and then building your own richer view on top of that (with the court hearing analogy, distinction between minimal and ambitious morality, etc), such that your own view is like a plausible instance of this broad family of alternatives, but doesn’t obviously follow from the original motivation for an alternative? Is that roughly right?
That’s right! I’m not particularly attached to the details of the court hearing analogy, for instance. By contrast, the distinction between minimal morality and ambitious morality feels quite central to my framework. I wouldn’t know how to motivate person-affecting views without it. Developing and explaining my intuition that “person-affecting views are more palatable than many people seem to give them credit for” was one of my key motivations in writing the post.
(However, like I say in my post’s introduction and the summary, my framework is compatible with subjectivist totalism – if someone wants to dedicate their life toward an ambitious morality of classical total utilitarianism and cooperate with people with other goals in the style of minimal morality, that works perfectly well within the framework [and is even compatible with all the details I suggested for how I would flesh out and apply the framework].)
I also had a sense that you could have written a similar post just focused on simpler kinds of aggregative consequentialism (maybe you have in other posts, afraid I haven’t read them all); in some sense you picked an especially ambitious challenge in (i) developing a perspective on ethics that can be applied broadly; and then (ii) applying it to an especially complex part of ethics. So double props I guess!
Yeah. I think the distinction between minimal morality and ambitious morality could have been a standalone post. For what it’s worth, my impression is that many moral anti-realists in EA already internalized something like this distinction. That is, even anti-realists who already know what to value (as opposed to feeling very uncertain and deferring the point where they form convictions to a time after more moral reflection or to the output of a hypothetical “reflection procedure”) tend to respect the fact that others have different goals. I don’t think that’s just because they think they are cooperating with aliens. Instead, as anti-realists, they are perfectly aware that their way of looking at morality isn’t the only one, so they understand they’d need to be jerks in some sense to disrespect others’ goals or moral convictions.
In any case, explaining this distinction took up some space. Then, I added examples and discussions of population ethics issues because I thought a good way to explain the framework is by showing how it handles some of the dilemma cases people are already familiar with.
On this framework, on what grounds can someone not “defensibly ignore” another’s complaint? Am I right in thinking this is because ignoring some complaints means frustrating others’ goals or preferences, and not frustrating others’ goals or preferences is indefensible, as long as we care about getting along/cooperating at all (minimal morality)?
(Probably you meant to say “and [] frustrating others’ goals or preferences is indefensible”?)
Yes, that’s what it’s about on a first pass. Other things that matter:
The lesser of several evils is always defensible.
If it would be quite demanding to avoid thwarting someone’s interests/goals, then thwarting is defensible. [Minimal morality is low-demanding.]
You say "The exact reach of minimal morality is fuzzy/under-defined. How much is entailed by 'don't be a jerk'?" This seems important. For instance, you might see ‘drowning child’ framings as (compelling) efforts to move charitable giving within the purview of “you’re a jerk if you don’t do this when you comfortably could.” Especially given the size of the stakes, could you imagine certain longtermist causes like “protecting future generations” similarly being framed as a component of minimal morality?
Yes, I do see the drowning child thought experiment as an example where minimal morality applies!
Regarding “protecting future generations” as a component of minimal morality:
My framework could maybe be adapted to incorporate this, but I suspect it would be difficult to make a coherent version of the framework where the reasons we’d (fully/always) count newly created future generations (and “cooperating through time” framings) don’t somehow re-introduce the assumption “something has intrinsic value.” I’d say the most central, most unalterable building blocks of my framework are “don’t use ‘intrinsic value’ (or related concepts) in your framing of the option space” and “think about ethics (at least partly) from the perspective of interests/goals.” So, to argue that minimal morality includes protecting our ability to bring future generations into existence (and actually doing this) regardless of present generations’ concerns about this, you’d have to explain why it’s indefensible/being a jerk to prioritize existing people over people who could exist. The relevant arguments I brought up against this are in this section, which includes endnote 21 for my main argument. I’ll quote them here:
Arguably, [minimal morality] also contains a procreation asymmetry for the more substantial reason that creating a specific person singles them out from the sea of all possible people/beings in a way that “not creating them” does not.[21]
And here the endnote:
If I fail to create a happy life, I’m acting suboptimally towards the subset of possible people who’d wish to be in that spot – but I’m not necessarily doing anything to highlight that particular subset. (Other possible people wouldn’t mind non-existence, and others yet would want to be created, but only under more specific conditions/circumstances.) By contrast, when I make a person who wishes they had never been born, I singled out that particular person in the most real sense. If I could foresee that they would be unhappy, the excuse “Some other possible minds wouldn’t be unhappy in your shoes” isn’t defensible. ↩︎
A key ingredient to my argument is that there’s no “universal psychology” that makes all possible people have the same interests/goals or the same way of thinking about existence vs. non-existence. Therefore, we can’t say “being born into a happy life is good for anyone.” At best, we could say “being born into a happy life is good for the sort of person who would find themselves grateful for it and would start to argue for totalist population ethics once they’re alive.” This raises the question: What about happy people who develop a different view on population ethics?
I develop this theme in a bunch of places throughout the article, for instance in places where I comment on the specific ways interests/goals-based ethics seem under-defined:
(1) Someone has under-defined interests/goals.
(2) It’s under-defined how many people/beings with interests/goals there will be.
(3) It’s under-defined which interests/goals a new person will have.
Point (3) in particular is sometimes under-appreciated. Without an objective axiology, I don’t think we can generalize about what’s good for newly created people/beings – there’s always the question “Which ones??”
Accordingly, there (IMO) seems to be an asymmetry here related to how creating a particular person singles out that particular person’s psychology in a way that not creating anyone does not. When you create a particular person, you better make sure that this particular person doesn’t object to what you did.
(You could argue that we just have to create happy people who will be grateful for their existence – but that would still feel a bit arbitrary in the sense that you’re singling out a particular type of psychology (why focus on people with the potential for gratefulness to exist?), and it would imply things like “creating a happy Buddhist monk has no moral value, but creating a happy life-hungry entrepreneur or explorer has great moral value.” In the other direction, you could challenge the basis for my asymmetry by asking whether looking only at a new mind’s self-assessment of their existence is too weak a criterion to prevent bad things. You could ask “What if we created a mind that doesn’t mind being in misery? Would it be permissible to engineer slaves who don’t mind working hard under miserable conditions?” In reply to that, I’d point out how even if the mind ranks death after being born as worse than anything else, that doesn’t make it okay to bring such a conflicted being into existence. The particular mind in question wouldn’t object to what you did, but nowhere in your decision to create that particular mind did you show any concern for newly created people/beings – otherwise you’d have created minds that don’t let you exploit them maximally and don’t have the type of psychology that puts them into internally conflicted states like “ARRRGHH PAIN ARRRGHH PAIN ARRRGHH PAIN, but I have to keep existing, have to keep going!!!” You’d only ever create that particular type of mind if you wanted to get away with not having to care about the mind’s well-being, and this isn’t a defensible motive under minimal morality.)
At this point, I want to emphasize that the main appeal of minimal morality is that it’s really uncontroversial. Whether potential people count the same as existing and sure-to-exist people is quite a controversial issue. My framework doesn’t say “possible people don’t count.” It only says that it’s wrong to think everyone has to care about potential happy future people.
All that said, the fact that EAs exist who stake most or even all of their caring budget on helping future generations come into a flourishing existence is an indirect reason why minimal morality may include caring for future generations! So, minimal morality takes this detour – which you might find a counterintuitive reason to care about the future, but nonetheless – where one reason people should care about future generations (in low-demanding ways) is because many other existing people care strongly and sincerely about there being future generations.
Am I right in thinking that in order to creatively duck things like the RP, pinprick argument, arguments against asymmetry (etc) you are rejecting that there is a meaningful “better than” relation between certain states of affairs in population ethics contexts? If so this seems somewhat implausible because there do seem to be some cases where one state of affairs is better than another, and views which say “sure, some comparisons are clear, but others are vague or subjective” seem complicated. Do you just need to opt out of the entire game of “some states of affairs are better than other states of affairs (discontinuous with our own world)”? Curious how you frame this in your own mind.
This description doesn’t quite resonate with me. Maybe it’s close, though. I would rephrase it as something like “We can often say that one outcome is meaningfully better than another outcome on some list of evaluation criteria, but there’s no objective list of evaluation criteria that everyone ought to follow.” (But that’s just a rephrasing of the central point “there’s no objective axiology.”)
I want to emphasize that I agree that, e.g., there’s an important sense in which creating a merely medium-happy person is worse than creating a very happy person. My framework allows for this type of better-than relation even though my framework also says “person-affecting views are a defensible systematization of ambitious morality.” How are these two takes compatible? Here, I point out that person-affecting views aren’t about what’s best for newly created people/beings. Person-affecting views, the way I motivate them in my framework, would be about doing what’s best for existing and sure-to-exist people/beings. Sometimes, existing and sure-to-exist people/beings want to bring new people/beings into existence. In those cases, we need some guidance about things like whether it’s permissible to bring unhappy people into existence. My solution: minimal morality already has a few things to say about how not to create new people/beings. So, person-affecting views are essentially about maximally benefitting the interests/goals of existing people/beings without overstepping minimal morality when it comes to new people/beings.
The above explains how my view “creatively ducks” arguments against the asymmetry.
I wouldn’t necessarily say that my view ducks the repugnant conclusion – at least not in all instances! Mostly, my view avoids versions of the repugnant conclusion where the initial paradise-like population is already actual. This also means it blocks the very repugnant conclusion. By contrast, when it comes to a choice between colonizing a new galaxy with either a small paradise-like population or a very large population with people with lower but still positive life quality, my framework actually says “both of these are compatible with minimal morality.”
(Personally, I’ve always found the repugnant conclusion a lot less problematic when it plays out with entirely new people in a far-away galaxy.)
I found the “court hearing analogy” and the overall discussion of population ethics in terms of the anticipated complaints/appeals/preferences of future people a bit confusing (because, as you point out, it’s not clear how it makes sense in light of the non-identity problem). In particular your tentative solution of talking about the interests of ‘interest groups’ seems like it’s kind of veering into the axiological territory that you wanted to avoid, no? As in: groups don’t literally have desires or preferences or goals or interests above and beyond the individuals that make them up. But we can’t compare across individuals here, so it’s not clear how we can meaningfully compare the interests of groups in this sense. So what are we comparing? Well, groups can be said to have different kinds of intrinsic value, and while that value could be manifested/realised/determined only by individuals, you can comfortably compare value across groups with different sets of individuals.
I don’t think my use of “interest groups” smuggles in the need for an objective axiology, but I agree that I should flesh out better how I think about this. (The way it’s currently in the text, it’s hard to interpret how I’m thinking of it.)
Let me create a sketch based on the simplest formulation of the non-identity problem (have child A now or delay pregnancy a few days to have a healthier, better off child B). The challenge is to explain why minimal morality would want us to wait. If we think of objections by newly created people/beings as tied to their personality after birth, then A cannot complain that they weren’t given full consideration of their interests. By contrast, if A and B envision themselves as an “interest group” behind some veil of ignorance of who is who after birth, they would agree that they prefer a procedure where it matters that the better off person is created.
Does that approach introduce axiology back in? In my view, not necessarily. But I can see that there’s a bit of a tension here. (E.g., imagining not-yet-born people trying to agree with each other behind a veil of ignorance on demands to make from their potential creators could lead to them agreeing on some kind of axiology? But then again, it depends on their respective psychologies – on “who are the people we’re envisioning creating?” The way I think about it, the potential new people would only reach agreement on demands in cases where one action is worse than another action on basically all defensible population-ethical positions. Accordingly, it’s only those no-go actions that minimal morality prohibits.) Therefore, I flagged the non-identity problem as an area of further exploration regarding its implications for my view.
I understand the difference in emphasis between saying that the moral significance of people’s well-being is derivative of its contribution to valuable states of affairs, as contrasted with saying that what makes states of affairs valuable just is people’s well-being (or something to that effect). But I’m curious what this means in a decision-relevant sense?
As I make clear in my replies to Richard Ngo, I’m not against the idea of having an axiology per se. I just claim that there’s no objective axiology. I understand the appeal of having a subjective axiology, so I would perhaps even count myself as a “fan of axiology.” (For these reasons, I don’t necessarily disagree with the two examples you gave / your two points about axiology – though I’m not sure I understood the second bullet point. I’m not familiar with that particular concept by Bernard Williams.)
I can explain again the role of subjectivist axiologies in my framework:
Minimal morality doesn’t have an axiology – interests/goals matter for minimal morality, but minimal morality doesn’t tell us exactly what to do under all circumstances. It’s more like a set of constraints/principles on what not to do (or things you have to take into account if you want to find a permissible way of doing something). However, minimal morality isn’t satisfying for people with high altruistic motivation. This is where the need for a subjective axiology comes in. And I’m saying that there are several different equally defensible options for subjective axiologies. What this means in practice is mostly just “To not be jerks, people with a subjective axiology like hedonism or tranquilism shouldn’t act like classical utilitarians or hedonist utilitarians in all circumstances. Instead, they should only follow their preferred morality in cases where minimal morality is silent – i.e., where minimal morality permits the implications of these single-minded moral frameworks.”
As I say in the text:
Where people have well-specified interests/goals, it would be a preposterous conception of [ambitious (care-)morality] to stick someone into an experience machine against their will or kill them against their will to protect them from future suffering.
(My comment replies to Richard Ngo cover some more points along the same theme.)
Thanks for writing this — I’m curious about approaches like this, and your post felt unusually comprehensive. I also don’t yet feel like I could faithfully represent your view to someone else, possibly because I read this fairly quickly.
Some scattered thoughts / questions below, written in a rush. I expect some or many of them are fairly confused! NNTR.
On this framework, on what grounds can someone not “defensibly ignore” another’s complaint? Am I right in thinking this is because ignoring some complaints means frustrating others’ goals or preferences, and not frustrating others’ goals or preferences is indefensible, as long as we care about getting along/cooperating at all (minimal morality)?
You say The exact reach of minimal morality is fuzzy/under-defined. How much is entailed by “don’t be a jerk?”. This seems important. For instance, you might see ‘drowning child’ framings as (compellling) efforts to move charitable giving within the purview of “you’re a jerk if you don’t do this when you comfortably could.” Especially given the size of the stakes, could you imagine certain longtermist causes like “protecting future generations” similarly being framed as a component of minimal morality?
One speculative way you could do this: you described ‘minimal morality’ as “contractualist” or “cooperation-focused” in spirit. Certainly some acts seem wrong because they just massively undermine the potential for many people living at the same time with many different goals to cooperate on whatever their goals are. But maybe there are some ways in which we collaborate/cooperate/make contracts across (large stretches of) time. Maybe this could ground obligations to future people in minimal morality terms.
I understand the difference in emphasis between saying that the moral significance of people’s well-being is derivative of its contribution to valuable states of affairs, as contrasted with saying that what makes states of affairs valuable just is people’s well-being (or something to that effect). But I’m curious what this means in a decision-relevant sense?
Here’s an analogy: my daily walk isn’t important because it increases the counter on my podometer; rather the counter matters because it says something about how much I’ve walked (and walking is the thing I really care about). To see this, consider that intervening on the counter without actually walking does not matter at all.
But unlike this analogy, fans of axiology might say that “the value of a state of affairs” is not a measure of what matters (actual people and their well-being) that can be manipulated independently of those things; rather it is defined in terms of what you say actually matters, so there is no substantial disagreement beyond one of emphasis (this is why I don’t think I’m on board with ‘further thought’ complaints against aggregative consequentialism). Curious what I’m missing here, though I realise this is maybe also a distraction.
I found the “court hearing analogy” and the overall discussion of population ethics in terms of the anticipated complains/appeals/preferences of future people a bit confusing (because, as you point out, it’s not clear how it makes sense in light of the non-identity problem). In particular your tentative solution of talking about the interests of ‘interest groups’ seems like it’s kind of veering into the axiological territory that you wanted to avoid, no? As in: groups don’t literally have desires or preferences or goals or interests above and beyond the individuals that make them up. But we can’t compare across individuals here, so it’s not clear how we can meaningfully compare the interests of groups in this sense. So what are we comparing? Well, groups can be said to have different kinds of intrinsic value, and while that value could be manifested/realised/determined only by individuals, you can comfortably compare value across groups with different sets of individuals.
Am I right in thinking that in order to creatively duck things like the RP, pinprick argument, arguments against asymmetry (etc) you are rejecting that there is a meaningful “better than” relation between certain states of affairs in population ethics contexts? If so this seems somewhat implausible because there do seem to be some cases where one state of affairs is better than another, and views which say “sure, some comparisons are clear, but others are vague or subjective” seem complicated. Do you just need to opt out of the entire game of “some states of affairs are better than other states of affairs (discontinuous with our own world)”? Curious how you frame this in your own mind.
I had an overall sense that you are both explaining the broad themes of an alternative to populaiton ethics grounded in axiology; and then building your own richer view on top of that (with the court hearing analogy, distinction between minimal and ambitious morality, etc), such that your own view is like a plausible instance of this broad family of alternatives, but doesn’t obviously follow from the original motivation for an alternative? Is that roughly right?
I also had a sense that you could have written a similar post just focused on simpler kinds of aggregative consequentialism (maybe you have in other posts, afraid I haven’t read them all); in some sense you picked an especially ambitious challenge in (i) developing a perspective on ethics that can be applied broadly; and then (ii) applying it to an especially complex part of ethics. So double props I guess!
Thanks for these questions! Your descriptions capture what I meant in most bullet points, but there are some areas where I think I failed to communicate some features of my position.
I’ll reply to your points in a different order than you made them (because that makes a few things easier). I’ll also make several comments in a thread rather than replying to everything at once
That’s right! I’m not particularly attached to the details of the court hearing analogy, for instance. By contrast, the distinction between minimal morality and ambitious morality feels quite central to my framework. I wouldn’t know how to motivate person-affecting views without it. Better developing and explaining my intuition “person-affecting views are more palatable than many people seem to give them credit for” was one of the key motivations I had in writing the post.
(However, like I say in my post’s introduction and the summary, my framework is compatible with subjectivist totalism – if someone wants to dedicate their life toward an ambitious morality of classical total utilitarianism and cooperate with people with other goals in the style of minimal morality, that works perfectly well within the framework [and is even compatible with all the details I suggested for how I would flesh out and apply the framework].)
Yeah. I think the distinction between minimal morality and ambitious morality could have been a standalone post. For what it’s worth, my impression is that many moral anti-realists in EA already internalized something like this distinction. That is, even anti-realists who already know what to value (as opposed to feeling very uncertain and deferring the point where they form convictions to a time after more moral reflection or to the output of a hypothetical “reflection procedure”) tend to respect the fact that others have different goals. I don’t think that’s just because they think they are cooperating with aliens. Instead, as anti-realists, they are perfectly aware that their way of looking at morality isn’t the only one, so they understand they’d need to be jerks in some sense to disrespect others’ goals or moral convictions.
In any case, explaining this distinction took up some space. Then, I added examples and discussions of population ethics issues because I thought a good way to explain the framework is by showing how it handles some of the dilemma cases people are already familiar with.
(Probably you meant to say “and [] frustrating others’ goals or preferences is indefensible”?)
Yes, that’s what it’s about on a first pass. Other things that matter:
The lesser of several evils is always defensible.
If it would be quite demanding to avoid thwarting someone’s interests/goals, then thwarting is defensible. [Minimal morality is low in demandingness.]
Yes, I do see the drowning child thought experiment as an example where minimal morality applies!
Regarding “protecting future generations as a component of minimal morality:”
My framework could maybe be adapted to incorporate this, but I suspect it would be difficult to make a coherent version of the framework where the reasons for (fully/always) counting newly created future generations (and the “cooperating through time” framings) don’t somehow re-introduce the assumption “something has intrinsic value.” I’d say the most central, most unalterable building blocks of my framework are “don’t use ‘intrinsic value’ (or related concepts) in your framing of the option space” and “think about ethics (at least partly) from the perspective of interests/goals.” So, to argue that minimal morality includes protecting our ability to bring future generations into existence (and actually doing so) regardless of present generations’ concerns about this, you’d have to explain why it’s indefensible/being a jerk to prioritize existing people over people who could exist. The relevant arguments I brought up against this are in this section, which includes endnote 21 for my main argument. I’ll quote them here:
And here the endnote:
A key ingredient to my argument is that there’s no “universal psychology” that makes all possible people have the same interests/goals or the same way of thinking about existence vs. non-existence. Therefore, we can’t say “being born into a happy life is good for anyone.” At best, we could say “being born into a happy life is good for the sort of person who would find themselves grateful for it and would start to argue for totalist population ethics once they’re alive.” This raises the question: What about happy people who develop a different view on population ethics?
I develop this theme in a bunch of places throughout the article, for instance in places where I comment on the specific ways interests/goals-based ethics seem under-defined:
Point (3) in particular is sometimes under-appreciated. Without an objective axiology, I don’t think we can generalize about what’s good for newly created people/beings – there’s always the question “Which ones??”
Accordingly, there (IMO) seems to be an asymmetry here related to how creating a particular person singles out that particular person’s psychology in a way that not creating anyone does not. When you create a particular person, you better make sure that this particular person doesn’t object to what you did.
(You could argue that we just have to create happy people who will be grateful for their existence – but that would still feel a bit arbitrary in the sense that you’re singling out a particular type of psychology (why focus on people with the potential for gratefulness to exist?), and it would imply things like “creating a happy Buddhist monk has no moral value, but creating a happy, life-hungry entrepreneur or explorer has great moral value.” In the other direction, you could challenge the basis for my asymmetry by suggesting that looking only at a new mind’s self-assessment of their existence is too weak a criterion to prevent bad things. You could ask “What if we created a mind that doesn’t mind being in misery? Would it be permissible to engineer slaves who don’t mind working hard under miserable conditions?” In reply to that, I’d point out that even if the mind ranks death after being born as worse than anything else, that doesn’t make it okay to bring such a conflicted being into existence. The particular mind in question wouldn’t object to what you did, but nowhere in your decision to create that particular mind did you show any concern for newly created people/beings – otherwise you’d have created minds that don’t let you exploit them maximally and don’t have the type of psychology that puts them into internally conflicted states like “ARRRGHH PAIN ARRRGHH PAIN ARRRGHH PAIN, but I have to keep existing, have to keep going!!!” You’d only ever create that particular type of mind if you wanted to get away with not having to care about the mind’s well-being, and this isn’t a defensible motive under minimal morality.)
At this point, I want to emphasize that the main appeal of minimal morality is that it’s really uncontroversial. Whether potential people count the same as existing and sure-to-exist people is quite a controversial issue. My framework doesn’t say “possible people don’t count.” It only says that it’s wrong to think everyone has to care about potential happy future people.
All that said, the fact that there are EAs who stake most or even all of their caring budget on helping future generations come into a flourishing existence is an indirect reason why minimal morality may include caring for future generations! So, minimal morality takes a detour here – which you might find a counterintuitive route to caring about the future, but nonetheless – where one reason people should care about future generations (in low-demanding ways) is that many other existing people care strongly and sincerely about there being future generations.
This description doesn’t quite resonate with me. Maybe it’s close, though. I would rephrase it as something like “We can often say that one outcome is meaningfully better than another outcome on some list of evaluation criteria, but there’s no objective list of evaluation criteria that everyone ought to follow.” (But that’s just a rephrasing of the central point “there’s no objective axiology.”)
I want to emphasize that I agree that, e.g., there’s an important sense in which creating a merely medium-happy person is worse than creating a very happy person. My framework allows for this type of better-than relation even though my framework also says “person-affecting views are a defensible systematization of ambitious morality.” How are these two takes compatible? Here, I point out that person-affecting views aren’t about what’s best for newly created people/beings. Person-affecting views, the way I motivate them in my framework, would be about doing what’s best for existing and sure-to-exist people/beings. Sometimes, existing and sure-to-exist people/beings want to bring new people/beings into existence. In those cases, we need some guidance about things like whether it’s permissible to bring unhappy people into existence. My solution: minimal morality already has a few things to say about how not to create new people/beings. So, person-affecting views are essentially about maximally benefitting the interests/goals of existing people/beings without overstepping minimal morality when it comes to new people/beings.
The above explains how my view “creatively ducks” arguments against the asymmetry.
I wouldn’t necessarily say that my view ducks the repugnant conclusion – at least not in all instances! Mostly, my view avoids versions of the repugnant conclusion where the initial paradise-like population is already actual. This also means it blocks the very repugnant conclusion. By contrast, when it comes to a choice between colonizing a new galaxy with either a small paradise-like population or a very large population with people with lower but still positive life quality, my framework actually says “both of these are compatible with minimal morality.”
(Personally, I’ve always found the repugnant conclusion a lot less problematic when it plays out with entirely new people in a far-away galaxy.)
I don’t think my use of “interest groups” smuggles in the need for an objective axiology, but I agree that I should flesh out better how I think about this. (The way it’s currently in the text, it’s hard to interpret how I’m thinking of it.)
Let me create a sketch based on the simplest formulation of the non-identity problem (have child A now or delay pregnancy a few days to have a healthier, better off child B). The challenge is to explain why minimal morality would want us to wait. If we think of objections by newly created people/beings as tied to their personality after birth, then A cannot complain that they weren’t given full consideration of their interests. By contrast, if A and B envision themselves as an “interest group” behind some veil of ignorance of who is who after birth, they would agree that they prefer a procedure where it matters that the better off person is created.
Does that approach introduce axiology back in? In my view, not necessarily. But I can see that there’s a bit of a tension here. (E.g., imagining not-yet-born people trying to agree with each other behind a veil of ignorance on demands to make of their potential creators could lead to them agreeing on some kind of axiology? But then again, it depends on their respective psychologies, on “who are the people we’re envisioning creating?” The way I think about it, the potential new people would only reach agreement on demands in cases where one action is worse than another on basically all defensible population-ethical positions. Accordingly, it’s only those no-go actions that minimal morality prohibits.) Therefore, I flagged the non-identity problem as an area for further exploration regarding its implications for my view.
As I make clear in my replies to Richard Ngo, I’m not against the idea of having an axiology per se. I just claim that there’s no objective axiology. I understand the appeal of having a subjective axiology so I would perhaps even count myself as a “fan of axiology.” (For these reasons, I don’t necessarily disagree with the two examples you gave / your two points about axiology – though I’m not sure I understood the second bullet point. I’m not familiar with that particular concept by Bernard Williams.)
I can explain again the role of subjectivist axiologies in my framework:
Minimal morality doesn’t have an axiology – interests/goals matter for minimal morality, but minimal morality doesn’t tell us exactly what to do under all circumstances. It’s more like a set of constraints/principles on what not to do (or things you have to take into account if you want to find a permissible way of doing something). However, minimal morality isn’t satisfying for people with high altruistic motivation. This is where the need for a subjective axiology comes in. And I’m saying that there are several different, equally defensible options for subjective axiologies. What this means in practice is mostly just “To not be jerks, people with a subjective axiology like hedonism or tranquilism shouldn’t act like classical utilitarians or hedonist utilitarians in all circumstances. Instead, they should only follow their preferred morality in cases where minimal morality is silent – i.e., where minimal morality permits the implications of these single-minded moral frameworks.”
As I say in the text:
(My comment replies to Richard Ngo cover some more points along the same theme.)