Thanks. That is an interesting argument, and this isn’t the first time I’ve heard it, but I think I see its significance to the issue more clearly now.
I will have to think about this more. My gut reaction is: I don’t trust my ability to extrapolate out that many orders of magnitude into the future. So, yes, this is a good first-principles physics argument about the limits to growth. (Much better than the people who stop at pointing out that “the Earth is finite”). But once we’re even 10^12 away from where we are now, let alone 10^200, who knows what we’ll find? Maybe we’ll discover FTL travel (ok, unlikely). Maybe we’ll at least be expanding out to other galaxies. Maybe we’ll have seriously decoupled economic growth from physical matter: maybe value to humans is in the combinations and arrangements of things, rather than things themselves—bits, not atoms—and so we have many more orders of magnitude to play with.
If you’re not willing to apply a moral discount factor against the far future, shouldn’t we at least, at some point, apply an epistemic discount? Are we so certain about progress/growth being a brief, transient phase that we’re willing to postpone the end of it by literally the length of human civilization so far, or longer?
I think this actually does point to a legitimate and somewhat open question about how to deal with uncertainty between different 'worldviews'. Similar to Open Phil, I'm using 'worldview' to refer to a set of fundamental beliefs that are an entangled mix of philosophical and empirical claims and values.
E.g., suppose I’m uncertain between:
Worldview A, according to which I should prioritize based on time scales of trillions of years.
Worldview B, according to which I should prioritize based on time scales of hundreds of years.
This could be for a number of reasons: an empirical prediction that civilization is going to end after a few hundred years; ethical commitments such as pure time preference, person-affecting views, egoism, etc.; or epistemic commitments such as high-level heuristics for how to think about long time scales or situations with significant radical uncertainty.
One way to deal with this uncertainty is to put both values on a "common scale", and then apply expected value: perhaps on worldview A, I can avert quintillions of expected deaths, while on worldview B "only" a trillion lives are at stake in my decision. Even if I only have a low credence in A, after applying expected value I will end up making decisions based just on A.
But this is not the only game in town. We might instead think of A and B as two groups of people with different interests trying to negotiate an agreement. In that case, we may have the intuition that A should make some concessions to B even if A were a much larger group, or more powerful, or the like. This can motivate ideas such as variance normalization or the 'parliamentary approach'.
(See more generally: normative uncertainty.)
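To make the contrast between these two aggregation rules concrete, here is a toy sketch in Python. The credences and stakes are made up for illustration (loosely echoing the "quintillions vs. trillions" numbers above), and the normalization step follows my rough understanding of variance normalization: rescale each worldview's valuations to a common spread across the options before weighting by credence.

```python
# Toy illustration only: made-up credences and stakes, not anyone's actual estimates.
import statistics

credences = {"A": 0.1, "B": 0.9}  # low credence in the "trillions of years" worldview

# Value each worldview assigns to two options, in expected lives at stake:
# "longterm" = reduce extinction risk, "nearterm" = help people over the coming decades.
values = {
    "A": {"longterm": 1e18, "nearterm": 1e12},  # quintillions vs. trillions
    "B": {"longterm": 1e11, "nearterm": 1e12},
}

def naive_expected_value(option):
    """Credence-weighted value on a single 'common scale'."""
    return sum(credences[w] * values[w][option] for w in credences)

def variance_normalized_value(option):
    """Rescale each worldview to mean 0 / spread 1 across options, then weight by credence."""
    total = 0.0
    for w in credences:
        vs = list(values[w].values())
        mean, spread = statistics.mean(vs), statistics.pstdev(vs)
        total += credences[w] * (values[w][option] - mean) / spread
    return total

for option in ("longterm", "nearterm"):
    print(option, naive_expected_value(option), variance_normalized_value(option))

# Naive expected value: "longterm" wins by roughly five orders of magnitude even at
# only 10% credence in A, i.e. worldview A swamps the decision. After variance
# normalization, each worldview's "vote" has the same spread, and B's 90% credence
# carries the day.
```

The point of the toy example is just that, with stakes this lopsided, the choice of aggregation rule does most of the work, not the credences themselves.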
Now, I do have views on this matter that don't make me very sympathetic to allocating a significant chunk of my resources to, say, speeding up economic growth or other things someone concerned about the next few decades might prioritize. (Both because of my views on normative uncertainty and because I'm not aware of anything sufficiently close to 'worldview B' that I find sufficiently plausible—these kinds of worldviews, from my perspective, sit in too awkward a spot between impartial consequentialism and a much more 'egoistic', agent-relative, or otherwise nonconsequentialist perspective.)
But I do think that the most likely way that someone could convince me to, say, donate a significant fraction of my income to 'progress studies' or AMF or The Good Food Institute (etc.) would be by convincing me that actually I want to aggregate the different 'worldviews' I find plausible in a different way. This certainly seems more likely to change my mind than an argument aiming to show that, when we take longtermism for granted, we should prioritize one of these other things.
[ETA: I forgot to add that another major consideration is that, at least on some plausible estimates and my own best guess, existential risk this century is so high—and our ability to reduce it sufficiently good—that even if I thought I should prioritize primarily based on short time scales, I might well end up prioritizing reducing x-risk anyway. See also, e.g., here.]
I think I have a candidate for a “worldview B” that some EAs may find compelling. (Edit: Actually, the thing I’m proposing also allocates some weight to trillions of years, but it differs from your “worldview A” in that nearer-term considerations don’t get swamped!) It requires a fair bit of explaining, but IMO that’s because it’s generally hard to explain how a framework differs from another framework when people are used to only thinking within a single framework. I strongly believe that if moral philosophy had always operated within my framework, the following points would be way easier to explain.
Anyway, I think standard moral-philosophical discourse is a bit dumb in that it includes categories without clear meaning. For instance, the standard discourse talks about notions like, “What’s good from a universal point of view,” axiology/theory of value, irreducibly normative facts, etc.
The above notions fail at reference – they don’t pick out any unambiguously specified features of reality or unambiguously specified sets from the option space of norms for people/agents to adopt.
You seem to be unexcited about approaches to moral reasoning that are more "'egoistic', agent-relative, or otherwise nonconsequentialist" than the way you think moral reasoning should be done. Probably, "the way you think moral reasoning should be done" depends on some placeholder concepts like "axiology" or "what's impartially good" that would have to be defined crisply if we wanted to completely solve morality according to your preferred evaluation criteria. Consider the possibility that, if we were to dig into things and formalize your desired criteria, you'd realize that there's a sense in which any answer to population ethics has to be at least a little bit 'egoistic' or agent-relative. Would this weaken your intuitions that person-affecting views are unattractive?
I’ll try to elaborate now why I believe “There’s a sense in which any answer to population ethics has to be at least a little bit ‘egoistic’ or agent-relative.”
Basically, I see a tension between "there's an objective axiology" and "people have the freedom to choose life goals that represent their idiosyncrasies and personal experiences." If someone claims there's an objective axiology, they're implicitly saying that anyone who doesn't adopt an optimizing mindset around successfully scoring "utility points" according to that axiology is making some kind of mistake / isn't being optimally rational. They're implicitly saying it wouldn't make sense for people (at least for people who are competent/organized enough to reliably pursue long-term goals) to live their lives in pursuit of anything other than "pursuing points according to the one true axiology." Note that this is a strange position to adopt! Especially when we look at the diversity between people in what sorts of lives they find most satisfying (e.g., differences between investment bankers, MMA fighters, novelists, people who open up vegan bakeries, people for whom family+children means everything, those EA weirdos, etc.), it seems strange to say that all these people should conclude that they ought to prioritize surviving until the Singularity so as to get the most utility points overall. To say that everything before that point doesn't really matter by comparison. To say that any romantic relationships people enter are only placeholders until something better comes along with experience-machine technology.
Once you give up on the view that there’s an objectively correct axiology (as well as the view that you ought to follow a wager for the possibility of it), all of the above considerations (“people differ according to how they’d ideally want to score their own lives”) will jump out at you, no longer suppressed by this really narrow and fairly weird framework of “How can we subsume all of human existence into utility points and have debates on whether we should adopt ‘totalism’ toward the utility points, or come up with a way to justify taking a person-affecting stance.”
There’s a common tendency in EA to dismiss the strong initial appeal of person-affecting views because there’s no elegant way to incorporate them into the moral realist “utility points” framework. But one person’s modus ponens is another’s modus tollens: Maybe if your framework can’t incorporate person-affecting intuitions, that means there’s something wrong with the framework.
I suspect that what's counterintuitive about totalism in population ethics is less about the "total"/"everything" part of it, and more related to what's counterintuitive about "utility points" (i.e., the postulate that there's an objective, all-encompassing axiology). I'm pretty convinced that something like person-affecting views, though obviously conceptualized somewhat differently (since we'd no longer be assuming moral realism), intuitively makes a lot of sense.
Here’s how that would work (now I’ll describe the new proposal for how to do ethical reasoning):
Utility is subjective. What's good for someone is what they deem good for themselves by their own lights: the life goals for which they get up in the morning and try to do their best.
A beneficial outcome for all of humanity could be defined by giving individual humans the opportunity to reflect on their goals in life under ideal conditions, and then implementing some compromise (e.g., preference utilitarianism, or – probably better – a moral parliament framework) that makes everyone really happy with the outcome.
Preference utilitarianism or the moral parliament framework would concern people who already exist – these frameworks' population-ethical implications are indirectly specified, in the sense that they depend on what the people on Earth actually want. Still, people individually have views about how they want the future to go. Parents may care about having more children, many people may care about intelligent Earth-originating life not going extinct, some people may care about creating as much hedonium as possible in the future, etc.
In my worldview, I conceptualize the role of ethics as two-fold:
(1) Inform people about the options for wisely chosen subjective life goals
--> This can include life goals inspired by a desire to do what’s “most moral” / “impartial” / “altruistic,” but it can also include more self-oriented life goals
(2) Provide guidance for how people should deal with the issue that not everyone shares the same life goals
Population ethics, then, is a subcategory of (1). Assuming you're looking for an altruistic life goal rather than a self-oriented one, you're faced with the question of whether your notion of "altruism" includes bringing happy people into existence. No matter what you say, your answer to population ethics will be, in a weak sense, 'egoistic' or agent-relative, simply because you're not answering "What's the right population ethics for everyone?" You're just answering, "What's my vote for how to allocate future resources?" (And you'd be trying to make your vote count in an altruistic/impartial way – but you don't have sole authority on that.)
If moral realism is false, notions like “optimal altruism” or “What’s impartially best” are under-defined. Note that under-definedness doesn’t mean “anything goes” – clearly, altruism has little to do with sorting pebbles or stacking cheese on the moon. “Altruism is under-defined” just means that there are multiple ‘good’ answers.
Finally, here’s the “worldview B” I promised to introduce:
Within the anti-realist framework I just outlined, altruistically motivated people have to think about their preferences for what to do with future resources. And they can – perfectly coherently – adopt the view: “Because I have person-affecting intuitions, I don’t care about creating new people; instead, I want to focus my ‘altruistic’ caring energy on helping people/beings that exist regardless of my choices. I want to help them by fulfilling their life goals, and by reducing the suffering of sentient beings that don’t form world-models sophisticated enough to qualify for ‘having life goals’.”
Note that a person who thinks this may end up caring a great deal about humans not going extinct. However, unlike in the standard framework for population ethics, she’d care about this not because she thinks it’s impartially good for the future to contain lots of happy people. Instead, she thinks it’s good from the perspective of the life goals of specific, existing others, for the future to go on and contain good things.
Is that really such a weird view? I really don’t think so, myself. Isn’t it rather standard population-ethical discourse that’s a bit weird?
Edit: (Perhaps somewhat related: my thoughts on the semantics of what it could mean that ‘pleasure is good’. My impression is that some people think there’s an objectively correct axiology because they find experiential hedonism compelling in a sort of ‘conceptual’ way, which I find very dubious.)
I hope to have time to read your comment and reply in more detail later, but for now just one quick point because I realize my previous comment was unclear:
I am actually sympathetic to an “‘egoistic’, agent-relative, or otherwise nonconsequentialist perspective”. I think overall my actions are basically controlled by some kind of bargain/compromise between such a perspective (or perhaps perspectives) and impartial consequentialism.
The point is just that, from within these other perspectives, I happen to not be that interested in "impartially maximize value over the next few hundred years". I endorse helping my friends, maybe I endorse volunteering in a soup kitchen or something like that; I also endorse being vegetarian or donating to AMF, or otherwise reducing global poverty and inequality (and yes, within these 'causes' I tend to prefer larger over smaller effects); I also endorse reducing far-future s-risks and current wild animal suffering, but not quite as much. But this is all guided more by responding to reactive attitudes like resentment and indignation than by any moral theory. It looks a lot like moral particularism, and so it's somewhat hard to move me with arguments in that domain (it's not impossible, but it would require something that's more similar to psychotherapy or raising a child or "things the humanities do" than to doing analytic philosophy).
So this roughly means that if you wanted to convince me to do X, then you either need to be “lucky” that X is among the things I happen to like for idiosyncratic reasons—or X needs to look like a priority from an impartially consequentialist outlook.
It sounds like we both agree that when it comes to reflecting about what’s important to us, there should maybe be a place for stuff like “(idiosyncratic) reactive attitudes,” “psychotherapy or raising a child or ‘things the humanities do’” etc.
Your view seems to be that you have two modes of moral reasoning: The impartial mode of analytic philosophy, and the other thing (subjectivist/particularist/existentialist).
My point with my long comment earlier is basically the following: The separation between these two modes is not clear!
I'd argue that what you think of as the "impartial mode" has some clear-cut applications, but it's under-defined in some places, so different people will gravitate toward different ways of approaching the under-defined parts, based on appeals that you'd normally place in the subjectivist/particularist/existentialist mode.
Specifically, population ethics is under-defined. (It’s also under-defined how to extract “idealized human preferences” from people like my parents, who aren’t particularly interested in moral philosophy or rationality.)
I'm trying to point out that if you fully internalize that population ethics is going to be under-defined no matter what, you have more than one option for how to think about it. You no longer have to think of impartiality criteria and "never violating any transitivity axioms" as the only option. You can think of population ethics more like this: Existing humans have a giant garden (the 'cosmic commons') that is at risk of being burnt, and they can do stuff with it if they manage to preserve it, and people have different preferences about what definitely should or shouldn't be done with that garden. You can look for the "impartially best way to make use of the garden" – or you could look at how other people want to use the garden and compromise with them, or look for "meta-principles" that guide who gets to use which parts of the garden (and stuff that people definitely shouldn't do, e.g., no one should shit in their part of the garden), without already having a fixed vision for what the garden has to look like at the end, once it's all made use of. Basically, I'm saying that "knowing from the very beginning exactly what the 'best garden' has to look like, regardless of the gardening-related preferences of other humans, is not a forced move (especially because there's no universally correct solution anyway!). You're very much allowed to think of gardening in a different, more procedural and 'particularist' way."
Thanks! I think I basically agree with everything you say in this comment. I’ll need to read your longer comment above to see if there is some place where we do disagree regarding the broadly ‘metaethical’ level (it does seem clear we land on different object-level views/preferences).
In particular, while I happen to like a particular way of cashing out the “impartial consequentialist” outlook, I (at least on my best-guess view on metaethics) don’t claim that my way is the only coherent or consistent way, or that everyone would agree with me in the limit of ideal reasoning, or anything like that.
Thanks for sharing your reaction! I actually agree with some of it:
I do think it’s good to retain some skepticism about our ability to understand the relevant constraints and opportunities that civilization would face in millions or billions of years. I’m not 100% confident in the claims from my previous comment.
In particular, I have non-zero credence in views that decouple moral value from physical matter. And on such views it would be very unclear what limits to growth we’re facing (if any).
But if 'moral value' is even roughly what I think it is (in particular, that it requires information processing), then this seems about as unlikely as FTL travel being possible: I'm not a physicist, but my rough understanding is that there is only so much computation you can do with a given amount of energy or negentropy or whatever the relevant quantity is.
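To gesture at the kind of bound I have in mind, here is a rough back-of-the-envelope sketch. It assumes Landauer's principle (a minimum of kT·ln 2 of energy per irreversible bit operation) is the binding constraint, uses room temperature, and takes a standard figure for the Sun's power output; reversible computing or colder environments would change the numbers, but not the fact that the bound is finite.

```python
# Back-of-the-envelope: how many irreversible bit operations a given energy budget
# allows, if Landauer's bound (k_B * T * ln 2 per erased bit) is the limiting factor.
import math

k_B = 1.380649e-23            # Boltzmann constant, J/K
T = 300.0                     # assume room-temperature computing, K
solar_luminosity = 3.8e26     # approximate power output of the Sun, W

energy_per_bit = k_B * T * math.log(2)                       # ~2.9e-21 J per bit erasure
bits_per_joule = 1.0 / energy_per_bit                        # ~3.5e20
bits_per_second_per_sun = solar_luminosity * bits_per_joule  # ~1.3e47

print(f"{energy_per_bit:.2e} J per bit erasure")
print(f"{bits_per_joule:.2e} bit erasures per joule")
print(f"{bits_per_second_per_sun:.2e} bit erasures per second, using one Sun's output")
```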
It could still turn out that we’re wrong about how information processing relates to physics (relatedly, look what some current longtermists were interested in during their early days ;)), or about how value relates to information processing. But this also seems very unlikely to me.
However, for practical purposes my reaction to these points is interestingly somewhat symmetrical to yours. :)
I think these are considerations that actually raise worries about Pascal’s Mugging. The probability that we’re so wrong about fundamental physics, or that I’m so wrong about what I’d value if only I knew more, seems so small that I’m not sure what to do with it.
There is also the issue that if we were so wrong, I would expect that we're very wrong about a number of other things as well. I think the modal scenarios on which the above "limits to growth" picture is wrong are not "how we expect the future to look, but with FTL travel" but very weird things like "we're in a simulation". Unknown unknowns rather than known unknowns. So my reaction to the possibility of being in such a world is not "let's prioritize economic growth [or any other specific thing] instead", but more like "??? I don't know how to think about this, so I should, to a first approximation, ignore it".
Taking a step back, the place where I was coming from is: In this century, everyone might well die (or something similarly bad might happen). And it seems like there are things we can do that significantly help us survive. There are all these reasons why this might not be as significant as it seems—aliens, intelligent life re-evolving on Earth, us being in a simulation, us being super confused about what we'd value if we understood the world better, infinite ethics, etc.—but ultimately I'm going to ask myself: Am I sufficiently troubled by these possibilities to risk irrecoverable ruin? And currently I feel fairly comfortable answering this question with "no".
Overall, this makes me think that disagreements about the limits to growth, and how confident we can be in them or their significance, are probably not the crux here. Based on the whole discussion so far, I suspect it's more likely to be "Can sufficiently many people do sufficiently impactful things to reduce the risk of human extinction or similarly bad outcomes?". [And at least for you specifically, perhaps "impartial altruism vs. 'enlightened egoism'" might also play a role.]