I can’t tell what you mean by an objective axiology. It seems to me like you’re equivocating between a bunch of definitions:
An axiology is objective if it is universally true / independent of the decision-maker / not reliant on goals / implied by math. (I’m pointing to a cluster of intuitions rather than giving a precise definition.)
An axiology is objective if it provides a decision for every possible situation you could be in. (I would prefer to call this a “complete” axiology, perhaps.)
An axiology is objective if its decisions can be computed by taking each world, summing some welfare function over all the people in that world, and choosing the decision that leads to the world with the highest number. (I would prefer to call this an “aggregative” axiology, perhaps.)
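One minimal way to formalize definitions 2 and 3 (just a sketch, in notation I’m introducing here rather than anything either definition commits you to): write S for the set of possible situations, D(s) for the decisions available in situation s, W(s, d) for the world that results from taking decision d in s, and w for some welfare function over people. Then:

```latex
% A "complete" axiology (definition 2): a total choice function,
% i.e., it returns a decision for every possible situation.
C : S \to D, \qquad C(s) \in D(s) \quad \text{for every } s \in S

% An "aggregative" axiology (definition 3): the chosen decision is the one
% whose resulting world has the highest welfare total, summed over
% everyone who exists in that world.
C(s) \;=\; \operatorname*{arg\,max}_{d \in D(s)} \; \sum_{i \in \mathrm{people}(W(s,d))} w(i)
```

Note that this sketch deliberately says nothing about where w comes from – whether there is a single “objectively correct” welfare function, or whether the decision-maker picks one, is the separate question raised by definition 1.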
Examples of definition 1:
The search for an objective axiology assumes that there’s a well-defined “impartial perspective” that determines what’s intrinsically good/valuable. [...]
if there was an objective axiology, wouldn’t the people who don’t orient their goals around that axiology be making a mistake?
Examples of definition 2:
Without an objective axiology, the placeholder “do what’s most moral/altruistic” is under-defined. [...]
I think there’s an incongruence behind how people think of population ethics in the standard way. (The standard way being something like: look for an objective axiology, something that has “intrinsic value,” then figure out how we are to relate to that value/axiology and whether to add extra principles around it.)
Examples of definition 3:
we can note that population ethics has two separate perspectives: that of existing people/beings and that of newly created people/beings. (Without an objective axiology, these perspectives cannot be unified.)
I don’t think I’m relying on an objective-axiology-by-definition-1. Any time I say “good” you can think of it as “good according to the decision-maker” rather than “objectively good”. I think this doesn’t affect any of my arguments.
It is true that I am imagining an objective-axiology-by-definition-2 (which I would perhaps call a “complete axiology”). I don’t really see from your comment why this is a problem.
I agree this is “maximally ambitious morality” rather than “minimal morality”. Personally if I were designing “minimal morality” I’d figure out what “maximally ambitious morality” would recommend we design as principles that everyone could agree on and follow, and then implement those. I’m skeptical that if I ran through such a procedure I’d end up choosing person-affecting intuitions (in the sense of “Making People Happy, Not Making Happy People”); I think I plausibly would choose something along the lines of “if you create new people, make sure they have lives well beyond barely worth living”. Other people might differ from me, since they have different goals, but I suspect they wouldn’t.
I agree that if your starting point is “I want to ensure that people’s preferences are satisfied” you do not yet have a complete axiology, and in particular there’s an ambiguity about how to make decisions about which people to create. If this is your starting point then I think my post is saying “if you resolve this ambiguity in this particular way, you get Dutch booked”. I agree that you could avoid the Dutch book by resolving the ambiguity as “I will only create individuals whose preferences I have satisfied as best as I can”.
Personally if I were designing “minimal morality” I’d figure out what “maximally ambitious morality” would recommend we design as principles that everyone could agree on and follow, and then implement those.
I think this is a crux between us (or at least an instance where I didn’t describe very well how I think of “minimal morality”). (A lot of the other points I’ve been making, I see mostly as “here’s a defensible alternative to Rohin’s view” rather than “here’s why Rohin is wrong to not find (something like) person-affecting principles appealing.”)
In my framework, it wouldn’t be fair to derive minimal morality from a specific take on maximally ambitious morality. People who want to follow some maximally ambitious morality (this includes myself) won’t all pick the same interpretation of what that means. Not just for practical reasons, but fundamentally: for maximally ambitious morality, different interpretations are equally philosophically defensible.
Some people may have the objection “Wait, if maximally ambitious morality is under-defined, why adopt confident and specific views for how you want things to be? Why not keep your views on it under-defined, too?” (See Richard Ngo’s post on Moral indefinability.) I have answered this objection in this section of my post The Moral Uncertainty Rabbit Hole, Fully Excavated. In short, I give an analogy between “doing what’s maximally moral” and “becoming ideally athletically fit.” In the analogy, someone grows up with the childhood dream of becoming “ideally athletically fit” in a not-further-specified way. They then have the insight that “becoming ideally athletically fit” has different defensible interpretations – e.g., that of a marathon runner versus that of a 100m sprinter (or of someone whose fitness minimizes heart attack risk – which is actually elevated for professional athletes!). Now, it is an open question for them whether to care about a specific interpretation of the target concept or whether to embrace under-definedness. My advice to them for resolving this question is “think about which aspects of fitness you feel most drawn to, if any.”
Minimal morality is the closest we can come to something “objective” in the sense that it’s possible for philosophically sophisticated reasoners to all agree on it (your first interpretation). (This is precisely because minimal morality is unambitious – it only tells us to not be jerks; it doesn’t give clear guidance for what else to do.)
Minimal morality will feel unsatisfying to anyone who finds effective altruism appealing, so we want to go beyond it in places. However, within my framework, we can only go beyond it by forming morality/altruism-inspired life goals that, while we try to make them impartial/objective, inevitably have to lock in subjective judgment calls. (E.g., “Given that you can’t be both at once, do you want to be maximally impartially altruistic towards existing people or towards newly created people?” or “Assuming the latter, given that different types of newly created people will have different views on what’s good or bad for them, how will you define for yourself what it means to maximally benefit (which?) newly created people?”)
It is true that I am imagining an objective-axiology-by-definition-2 (which I would perhaps call a “complete axiology”). I don’t really see from your comment why this is a problem.
I agree it’s not a problem as long as you’re choosing that sort of success criterion (that you want a complete axiology) freely, rather than thinking it’s a forced move. (My sense is that you already don’t think of it as a forced move, so I should have been more clear that I wasn’t necessarily arguing against your views.)
I agree that if your starting point is “I want to ensure that people’s preferences are satisfied” you do not yet have a complete axiology, and in particular there’s an ambiguity about how to make decisions about which people to create. If this is your starting point then I think my post is saying “if you resolve this ambiguity in this particular way, you get Dutch booked”. I agree that you could avoid the Dutch book by resolving the ambiguity as “I will only create individuals whose preferences I have satisfied as best as I can”.
Yes, that describes it very well!
That said, I’m mostly arguing for a framework* for how to think about population ethics rather than a specific, object-level normative theory. So, I’m not saying the solution to population ethics is “existing people’s life goals get comparatively a lot of weight.” I’m only pointing out that this seems like a defensible position, given that the alternative would be to somewhat arbitrarily give them comparatively very little weight.
*By “framework,” I mean a set of assumptions for thinking about a domain, answering questions like “What am I trying to figure out?”, “What makes for a good solution?” and “What are the concepts I want to use to reason successfully about this domain?”
I can’t tell what you mean by an objective axiology. It seems to me like you’re equivocating between a bunch of definitions:
I like the three interpretations of “objective” that you distilled!
I use the word “objective” in the first sense, but you noted correctly that I’m arguing as though rejecting “there’s an objective axiology” in that sense implies other things, too. (I should make these hidden inferences explicit in future versions of the summary!)
I’d say I’ve been arguing hard against the first interpretation of “objective axiology” and softly against your second and third descriptions of desirable features of an axiology/”answer to population ethics.”
By “arguing hard” I mean “anyone who holds this view is making a mistake.” By “arguing softly” I mean “this view may be defensible, but there are other defensible alternatives.”
So, on the questions of success criteria for answers to population ethics (whether we’re looking for a complete axiology, per your 2nd description, and whether the axiology should “fall out” naturally from world states, rather than be specific to histories or to “who’s the person with the choice?”, per your 3rd description): I think it’s perfectly defensible to end up with answers that satisfy each respective criterion, but I think it’s important to keep the option space open while we’re discussing population ethics within a community, so we aren’t prematurely locking in that “solutions to population ethics” need to take a specific form. (It shouldn’t become uncool within EA to conceptualize things differently, if the alternatives are well formed / well argued.)
I think there’s a practical effect where people who think “ethics is objective” (in the first sense) might prematurely restrict their option space. (This won’t apply to you.) I think they’re looking for the sort of (object-level) normative theories that can fulfill the steep demands of objectivity – theories that all philosophically sophisticated others could agree on despite the widespread differences in people’s moral intuitions. With this constraint, one is likely to view it as a positive feature that a theory is elegantly simple, even if it demands a lot of “bullet biting.” (Moral reasoners couldn’t agree on the same answer if they all relied too much on their moral intuitions, which are different from person to person.)
In other words, if we thought that morality was a coordination game where we try to guess what everyone else is trying to guess will be the answer everyone converges on (and we also have priors that the answer is about “altruism” and “impartiality”), then we’d come up with different solutions than if we started without the “coordination game” assumption.
In any case, theories that fit your second and third descriptions tend to be simpler, so they’re more appealing to people who endorse “ethics is objective” (in the first sense). That’s the link I see between the three descriptions. It’s no coincidence that the examples I gave in my previous comment (moral issues around procreation) track common-sense ethics. The less we think “morality is objective” (in the first sense), the more alternatives we have to biting specific bullets.