We have to make judgment calls about how to structure our reflection strategy. Making those judgment calls already gets us in the business of forming convictions. So, if we are qualified to do that (in “pre-reflection mode,” setting up our reflection procedure), why can’t we also form other convictions similarly early?
I’m very confused/uncertain about many philosophical topics that seem highly relevant to morality/axiology, such as the nature of consciousness and whether there is such a thing as “measure” or “reality fluid” (and if so what is it based on). How can it be right or safe to form moral convictions under such confusion/uncertainty?
It seems quite plausible that in the future I’ll have access to intelligence-enhancing technologies that will enable me to think of many new moral/philosophical arguments and counterarguments, and/or to better understand existing ones. I’m reluctant to form any convictions until that happens (or the hope of it ever happening becomes very low).
Also I’m not sure how I would form object-level moral convictions even if I wanted to. No matter what I decide today, why wouldn’t I change my mind if I later hear a persuasive argument against it? The only thing I can think of is to hard-code something to prevent my mind being changed about a specific idea, or to prevent me from hearing or thinking arguments against a specific idea, but that seems like a dangerous hack that could mess up my entire belief system.
Therefore, it seems reasonable/defensible to think of oneself as better positioned to form convictions about object-level morality (in places where we deem it safe enough).
Do you have any candidates for where you deem it safe enough to form object-level moral convictions?
Thank you for engaging with my post!! :)
Also I'm not sure how I would form object-level moral convictions even if I wanted to. No matter what I decide today, why wouldn't I change my mind if I later hear a persuasive argument against it? The only thing I can think of is to hard-code something to prevent my mind being changed about a specific idea, or to prevent me from hearing or thinking arguments against a specific idea, but that seems like a dangerous hack that could mess up my entire belief system.
I don't think of "convictions" as anywhere near as strong as hard-coding something. A "conviction," to me, is little more than "whatever makes someone very confident that they won't change their mind." Occasionally, someone will change their mind about something even after saying that this was highly unlikely. (If that happens too often, they have a problem with calibration, which would be bad by their own lights, for obvious reasons. It seems okay/fine/to-be-expected for it to happen infrequently.)
I say "little more than [...]" rather than "is exactly [...]" because convictions matter in the context of one's life goals. As such, there's a sense of importance attached to them, which makes people more concerned than usual about changing their views for reasons they wouldn't endorse (while still staying open to low-likelihood ways of changing their minds through a process they do endorse!). (Compare this to: "I find it very unlikely that I'd ever come to like the taste of beetroot." If I later changed my mind on this because I joined a community where liking beetroot is seen as very cool, got peer-pressured into trying it a lot and into forming positive associations with it when eating it, and this somehow worked so that I actually came to like it, I wouldn't consider that to be as much of a tragedy as if a similar thing happened with my moral convictions.)
Also I’m not sure how I would form object-level moral convictions even if I wanted to.
Some people can't help it. I think this has a lot to do with reasoning styles. Since you're one of the people on LW/the EA Forum who place the most value on figuring out things related to moral uncertainty (and metaphilosophy), it seems likely that you're towards the far end of the spectrum of reasoning styles on this. (It also seems to me that you have a point – these issues are indeed important/underappreciated. After all, I wrote a book-length sequence on something that directly bears on these questions, albeit coming from somewhere closer to the other end of that spectrum.)
I’m very confused/uncertain about many philosophical topics that seem highly relevant to morality/axiology, such as the nature of consciousness and whether there is such a thing as “measure” or “reality fluid” (and if so what is it based on). How can it be right or safe to form moral convictions under such confusion/uncertainty?
Those two are good examples of things that I imagine most or maybe even all people* are still confused about. (I remember also naming consciousness/"Which computations do I care about?" as an example of something where I'd want more reflection, in a discussion we had on the same topic long ago.)
*(I don’t say “all people” outright because I find it arrogant when people who don’t themselves understand a topic declare that no one can understand it – for all I know, maybe Brian Tomasik’s grasp on consciousness is solid enough that he could form convictions about certain aspects of it, if forming convictions there were something that his mind felt drawn to.)
So, especially since issues related to consciousness and reality don’t seem too urgent for us to decide on, it seems like the most sensible option here, for people like you and me at least, is to defer.
Do you have any candidates for where you deem it safe enough to form object-level moral convictions?
Yeah, I think there are many other issues in morality/in "What are my goals?" that are independent of the two areas of confusion you brought up. We can discuss whether forming convictions early in those independent areas (and in particular, in areas where narrowing down our uncertainty would already be valuable** in the near term) is a defensible thing to do. (Obviously, it will depend on the person: it requires having a solid enough grasp of the options and the "option space" to conclude that you're unlikely to encounter view-shaking arguments or "better conceptualizations of what the debate is even about" in the future.)
**If someone buys into ECL (evidential cooperation in large worlds), making up your mind on your values becomes less relevant, because the best action according to ECL is to work on your comparative advantage among interventions that are valuable from the perspective of the ECL-inclined, highly goal-driven people around you. (One underlying assumption here is that, since we don't have much info about corners of the multiverse that look too different from ours, it makes sense to focus on cooperation partners that live in worlds relevantly similar to ours, i.e., worlds that contain the same value systems we see here among present-day humans.) Still, "buying into ECL" already involves having come to actionably high confidence on some tricky decision theory questions. I don't think there's a categorical difference between "values" and "decision theory," so having confidence in ECL-allowing decision theories already involves some degree of "forming convictions."
The most prominent areas I can think of where it makes sense for some people to form convictions early:
ECL pre-requirements (per the points in the paragraph above).
Should I reason about morality in ways that match my sequence takeaways here, or should I instead reason more the way some moral realists think we should?
Should I pursue self-oriented values or devote my life to altruism (or do something in between)?
What do I think about population ethics and specifically the question of “How likely is it that I would endorse a broadly ‘downside-focused’ morality after long reflection?”
These questions all have implications for how we should act in the near future. Furthermore, they’re the sort of questions where I think it’s possible to get a good enough grasp on the options and option space to form convictions early.
Altruism vs self-orientedness seems like the most straightforward one. You gotta choose something eventually (including the option of going with a mix), and you may as well choose now: the question is ~as urgent as it gets, and it's not like the features that make it hard to decide have much to do with complicated philosophical arguments or insights that require future technology. (This isn't to say that philosophical arguments have no bearing on the question – e.g., Famine, Affluence, and Morality, or Parfit on personal identity, contain arguments that some people might find unexpectedly compelling, so something is lost if someone makes up their mind without encountering those arguments. Or maybe some unusually privileged people would find themselves surprised if they read specific accounts of how hard life can be for non-privileged people, or if they became personally acquainted with some of these hardships. But all of these seem like things a person can investigate right here and now, without the need to wait for future superintelligent AI philosophy advisors. [Also, some of these seem like they may not just be "new considerations" but actual "transformative experiences" that change you into a different person. E.g., encountering someone facing hardship, helping them, and feeling deeply fulfilled can become the seed around which you form your altruistic identity.])
Next, I see moral realism vs anti-realism similarly (it's maybe more about "forming convictions on metaphilosophy" than about values directly, but just as with "decision theory vs values," I think "metaphilosophy vs values" is a fluid/fuzzy distinction). The issue has some urgent implications for EAs to decide on (though I don't think of it as the most important question), and there are, IMO, good reasons to expect that future insights won't make it significantly easier/won't change the landscape in which we have to find our decision. Namely, this is a question that already permeates all the ways in which one would go about doing further reflection. You need some kind of reasoning framework to get started with thinking about values, so you can't avoid choosing; there's no "by default safe option." As I argued in my sequence, thinking that there's a committing wager for non-naturalist moral realism only works if you've formed the conviction I labelled "metaethical fanaticism" (see here), while the wager for moral naturalism (see the discussion in this post we're here commenting on) isn't strong enough to do all the work on its own.
Some people will likely object at this point that moral realism vs anti-realism is not independent from questions of consciousness. Some moral realists place a lot of weight on consciousness realism and the claim that consciousness gives us direct access to moral value. (This view tends to be associated with moral realist hedonist axiology, or, in David Pearce’s case, with moral realist negative utilitarianism.) I addressed this claim here and found it unconvincing.
Lastly, population ethics might be the most controversial example, but I think it’s fairly easy to see that there won’t be a new consideration that will sway all sophisticated reasoners towards the same endpoint.
[Edit: BTW, when I speak of "forming convictions on population ethics," I don't necessarily mean some super-specific philosophical theory backed up with an academic paper or long blogpost. I mean something more like having strong confidence in broad features of a class of views. The more specific version is also possible/defensible, but I wouldn't want you to think of "I'm now a negative utilitarian" or "I'm now a hedonistic classical utilitarian" as the most central example of forming some convictions early.]
Firstly, there are the famous impossibility theorems. Personally, I am concerned that the frameworks in which people derive impossibility theorems often bake in non-obvious assumptions, so that they exclude options where we would come to think about population ethics from within a different ontology (meaning, with a different conceptual repertoire and different conceptualizations of "What question are we even trying to answer here, and what makes for a good solution?"). However, even within my moral anti-realism-informed framing of the option space in population ethics (see here), one eventually runs into the standard forking paths and dilemmas. I've observed that people have vastly different strong intuitions on fundamental aspects of population ethics, such as the question "Is non-existence a problem?" This means that people will in practice end up taking different personal stances on population ethics, and I don't see where they'd be going wrong. Besides impossibility theorems, which show us that a solution is unlikely to come about, I think we can also gesture at this from the other side and see why it is unlikely to come about. I think population ethics has its reputation of being vexed/difficult because it is "stretching the domain of our most intuitive moral principles to a point where things become under-defined."
Fundamentally, I conceptualize ethics as being about others' interests. (Dismantling Hedonism-inspired Moral Realism explains why I don't see ethics as being about experiences. Against Irreducible Normativity explains why I don't see use in conceptualizing ethics as "irreducible," as being about things we can't express in non-normative terminology.) So, something like preference utilitarianism feels like a pretty good answer to "How should a maximally wise and powerful god/shepherd AI take care of a fixed population of humans?" However, once we move away from a fixed population of existing humans, the interests of not-yet-existing minds become underdefined, in two(!) ways even:
(1) It's underdefined how many new others there will be.
(2) It's underdefined who the others will be. E.g., some conceivable new happy minds will be very grateful for their existence, but others will be like, "I'm happy and that's nice, but if I hadn't been born, that would have been okay too."
The underlying intuitions behind preference utilitarianism (the reasons why it seems compelling in fixed-population contexts, namely, that it gives everyone what they want and care about) no longer help us decide in tricky population-ethics dilemmas. That suggests inherent under-definedness.
And yet, population ethics is urgently relevant to many aspects of effective altruism. So, people are drawn to thinking deeply about it, and some of them will form convictions in the course of doing so. That's what happens, empirically. So, to disagree, you'd have to explain what the people to whom this happens are doing wrong. You might object, "Why form confident views about anything when you already know (or suspect) that they won't be backed by a consensus of ideal reasoners?"
My dialogue in one of the last sections of the post we’re here commenting on is relevant to that (quoting it here in full for ease of having everything in one location):
Critic: Why would moral anti-realists bother to form well-specified moral views? If they know that their motivation to act morally points in an arbitrary direction, shouldn’t they remain indifferent about the more contested aspects of morality? It seems that it’s part of the meaning of “morality” that this sort of arbitrariness shouldn’t happen.
Me: Empirically, many anti-realists do bother to form well-specified moral views. We see many examples among effective altruists who self-identify as moral anti-realists. That seems to be what people’s motivation often does in these circumstances.
Critic: Point taken, but I’m saying maybe they shouldn’t? At the very least, I don’t understand why they do it.
Me: You said that it’s “part of the meaning of morality” that arbitrariness “shouldn’t happen.” That captures the way moral non-naturalists think of morality. But in the moral naturalism picture, it seems perfectly coherent to consider that morality might be under-defined (or “indefinable”). If there are several defensible ways to systematize a target concept like “altruism/doing good impartially,” you can be indifferent between all those ways or favor one of them. Both options seem possible.
Critic: I understand being indifferent in the light of indefinability. If the true morality is under-defined, so be it. That part seems clear. What I don’t understand is favoring one of the options. Can you explain to me the thinking of someone who self-identifies as a moral anti-realist yet has moral convictions in domains where they think that other philosophically sophisticated reasoners won’t come to share them?
Me: I suspect that your beliefs about morality are too primed by moral realist ways of thinking. If you internalized moral anti-realism more, your intuitions about how morality needs to function could change.
Consider the concept of “athletic fitness.” Suppose many people grew up with a deep-seated need to study it to become ideally athletically fit. At some point in their studies, they discover that there are multiple options to cash out athletic fitness, e.g., the difference between marathon running vs. 100m-sprints. They may feel drawn to one of those options, or they may be indifferent.
Likewise, imagine that you became interested in moral philosophy after reading some moral arguments, such as Singer’s drowning child argument in Famine, Affluence and Morality. You developed the motivation to act morally as it became clear to you that, e.g., spending money on poverty reduction ranks “morally better” (in a sense that you care about) than spending money on a luxury watch. You continue to study morality. You become interested in contested subdomains of morality, like theories of well-being or population ethics. You experience some inner pressure to form opinions in those areas because when you think about various options and their implications, your mind goes, “Wow, these considerations matter.” As you learn more about metaethics and the option space for how to reason about morality, you begin to think that moral anti-realism is most likely true. In other words, you come to believe that there are likely different systematizations of “altruism/doing good impartially” that individual philosophically sophisticated reasoners will deem defensible. At this point, there are two options for how you might feel: either you’ll be undecided between theories, or you find that a specific moral view deeply appeals to you.
In the story I just described, your motivation to act morally comes from things that are very “emotionally and epistemically close” to you, such as the features of Peter Singer’s drowning child argument. Your moral motivation doesn’t come from conceptual analysis about “morality” as an irreducibly normative concept. (Some people do think that way, but this isn’t the story here!) It also doesn’t come from wanting other philosophical reasoners to necessarily share your motivation. Because we’re discussing a naturalist picture of morality, morality tangibly connects to your motivations. You want to act morally not “because it’s moral,” but because it relates to concrete things like helping people, etc. Once you find yourself with a moral conviction about something tangible, you don’t care whether others would form it as well.
I mean, you would care if you thought others not sharing your particular conviction was evidence that you’re making a mistake. If moral realism was true, it would be evidence of that. However, if anti-realism is indeed correct, then it wouldn’t have to weaken your conviction.
Critic: Why do some people form convictions and not others?
Me: It no longer feels like a choice when you see the option space clearly. You either find yourself having strong opinions on what to value (or how to morally reason), or you don’t.
The point I'm trying to make here is that people will have strong, path-defining intuitions about population ethics for reasons similar to why they were strongly moved by the drowning child argument. When they contemplate why they get up in the morning, they either find themselves motivated to make happy people, or they don't – just like some people find the drowning child argument compelling as a reason to re-orient a lot of their lives, while others don't. It's the same type of "where motivation to form life goals comes from." See also my post here, in particular the subsection on "planning mode," which describes how I believe people decide on adopting an identity around some specific life goal. (And the primary point there is that it's not all that different from how people make less-high-stakes decisions, such as what job to take or whether to go skiing on a weekend vs stay cozily at home.)
One underlying assumption in my thinking here is that when people say they hold a confident view on population ethics because of [insert some complicated philosophical argument], the true reason they hold that view*** is often some fundamental intuition about a pretty straightforward thought experiment, and the surrounding theory is more like "extra furnishing of that intuition."
***Instead of "that view," I should rather say "a view whose implications place it in this broad family of views (e.g., 'downside-focused' vs not)." For people to come up with highly specific and bullet-biting views like "negative utilitarianism" or "classical hedonistic utilitarianism," they do have to engage in a lot of abstract theoretical reasoning. However, why is someone drawn to theories that say it's important to create happy people? I feel like you can often trace this back to some point in the chain of arguments where there's a pretty straightforward thought experiment and the person goes, "This is where I stand my ground; I won't accept that." People stand their ground at very different points, and sometimes you have a dilemma where one person is like, "I always found the left path intuitive," and the other person is like, "The left path is absolutely horrible, and I believe that more confidently than I'd believe the merits of more abstract arguments."