Starting my own discussion thread.
My biggest doubt for the value of extinction risk reduction is my (asymmetric) person-affecting intuitions: I don’t think it makes things better to ensure future people (or other moral patients) come to exist for their own sake or the sake of the value within their own lives. But if future people will exist, I want to make sure things go well for them. This is summarized by the slogan “Make people happy, not make happy people”.
If this holds, then extinction risk reduction saves the lives of people who would otherwise die in an extinction event, which is presumably good for them, but this is only billions of humans.[1] If we don’t go extinct, then the number of our descendant moral patients could be astronomical. It therefore seems better to prioritize our descendant moral patients conditional on our survival because there are far far more of them.
Aliens (including alien artificial intelligence) complicate the picture. We (our descendants, whether human, AI or otherwise) could
use the resources aliens would have otherwise for our purposes instead of theirs, i.e. replace them,
help them, or
harm them or be harmed by them, e.g. through conflict.
I’m interested in others’ takes on this.
And it’s not clear we should want to save other animals, since their lives may be bad on average. It can also make a difference whether we’re talking about human extinction only or all animal extinction.
Surely any of our actions changes who exists in the future? So we aren’t in fact benefiting them?
(Whereas we can benefit specific aliens, e.g. by leaving our resources for them—our actions today don’t affect the identities of those aliens.)
Yes, we probably aren’t benefitting future individuals in a strict and narrow person-affecting sense much or at all.
However, there are some other person-affecting views that are concerned with differences for future moral patients:
On wide person-affecting views, if Alice would have a better life than Bob, then it’s better for Alice to come to exist than for Bob to come to exist, all else equal (this addresses the nonidentity problem). This doesn’t imply it’s better to ensure Alice or Bob exists than that neither does, though. (See Thomas, 2019, or Meacham, 2012, for examples.)
On asymmetric person-affecting views, it can still be good to prevent bad lives. (This needn’t imply antinatalism, because it could be that good lives can offset bad lives (Thomas, 2019; Pummer, 2024).)
I don’t remember 100%, but I think that Thomas and Pummer might both not be arguing for or articulating an axiological theory that ranks outcomes as better or worse, but rather a non-consequentialist theory of moral obligations/oughts. For my own part, I think views like that are a lot more plausible, but the view that it doesn’t make the outcome better to create additional happy lives seems to me very hard to defend.
I think Thomas did not take a stance on whether it was axiological or deontic in the GPI working paper, and instead just described the structure of a possible view. Pummer described his as specifically deontic and not axiological.
I’m not sure what should be classified as axiological or how important the distinction is. I’m certainly giving up the independence of irrelevant alternatives, but I think I can still rank outcomes in a way that depends on the option set.
I tend to be sceptical of appeals to value as option-set dependent as a means of defending person-affecting views, for the reason that we needn’t imagine outcomes as things that someone is able to choose to bring about, as opposed to just something that happens to be the case. If you imagine the possible outcomes this way, then you can’t appeal to option-set dependence to block the various arguments, since the outcomes are not options for anyone to realize. And if, say, it makes the outcome better if an additional happy person happens to exist without anyone making it so, then it is hard to see why it should be otherwise when someone brings about that the additional happy person exists. (Compare footnote 9 in this paper/report.)
Nice argument, I hadn’t heard that before!
I’m pretty sure that Broome gives an argument of this kind in Weighing Lives!
Hmm, interesting.
I think this bit from the footnote helped clarify, since I wasn’t sure what you meant in this comment:
Note, however, that there is no assumption that d—f are outcomes for anyone to choose, as opposed to outcomes that might arise naturally. Thus, it is not clear how the appeal to choice set dependent betterness can be used to block the argument that f is not worse than d, since there are no choice sets in play here.
I might be inclined to compare outcome distributions using the same person-affecting rules as I would for option sets, whether or not they’re being chosen by anyone. I think this can make sense on actualist person-affecting views, illustrated with my “Best in the outcome argument”s here, which is not framed in terms of choice. (The “Deliberation path argument” is framed in terms of choice.)
Then, I’d disagree with this:
And if, say, it makes the outcome better if an additional happy person happens to exist without anyone making it so
How asymmetric do you think things are? I tend to deprioritise s-risks (both accidental and intentional) because it seems like accidental suffering and intentional suffering will be a very small portion of the things that our descendants choose to do with energy. In everyday cases I don’t feel a pull to putting a lot of weight on suffering. But I feel more confused when we get to tail cases. Maximising pleasure intuitively feels meh to me, but maximising suffering sounds pretty awful. So I worry that (1) all of the value is in the tails, as per Power Laws of Value and (2) on my intuitive moral tastes the good tails are not that great and the bad tails are really bad.
I think I’m ~100% on no non-instrumental benefit from creating moral patients. Also pretty high on no non-instrumental benefit from creating new desires, preferences, values, etc. within existing moral patients. (I try to develop and explain my views in this sequence.)
I haven’t thought a lot about tradeoffs between suffering and other things, including pleasure, within moral patients that would exist anyway. I could see these tradeoffs going like they would for a classical utilitarian, if we hold an individual’s dispositions fixed.
To be clear, I’m a moral anti-realist (subjectivist), so I don’t think there’s any stance-independent fact about how asymmetric things should be.
Also, I’m curious if we can explain why you react like this:
Maximising pleasure intuitively feels meh to me, but maximising suffering sounds pretty awful
Some ideas: the complexity of value but not of disvalue? Or that the urgency of suffering is explained by the intensity of desire rather than by unpleasantness? Do you have any ideas?
(I have not read all of your sequence.) I’m confused about how being even close to 100% on something like this is appropriate. My sense is generally just that population ethics is hard, humans have somewhat weak minds in the space of possible minds, and our later post-human views on ethics might be far more subtle or quite different.
I’m a moral anti-realist (subjectivist), so I don’t think there’s an objective (stance-independent) fact of the matter. I’m just describing what I would expect to continue to endorse under (idealized) reflection, which depends on my own moral intuitions. The asymmetry is one of my strongest moral intuitions, so I expect not to give it up, and if it conflicts with other intuitions of mine, I’d sooner give those up instead.
I tend to think that the arguments against any theory of the good that encodes the intuition of neutrality are extremely strong. Here’s one that I think I owe to Teru Thomas (who may have got it from Tomi Francis?).
Imagine the following outcomes, A—D, where the columns are possible people, the numbers represent the welfare of each person when they exist, and # indicates non-existence.
A 5 −2 # #
B 5 # 2 2
C 5 2 2 #
D 5 −2 7 #
I claim that if you think it’s neutral to make happy people, there’s a strong case that you should think that B isn’t better than A. In other words, it’s not better to prevent someone from coming to exist and enduring a life that’s not worth living if you simultaneously create two people with lives worth living. And that’s absurd. I also think it’s really hard to believe if you believe the other side of the asymmetry: that it’s bad to create people whose lives are overwhelmed by suffering.
Why is there pressure on you to accept that B isn’t better than A? Well, first off, it seems plausible that B and C are equally good, since they have the same number of people at the same welfare levels. So let’s assume this is so.
Now, if you accept utilitarianism for a fixed population, you should think that D is better than C, since all the same people exist in these outcomes, and there’s more total/average welfare. (I’m pretty sure you can support this kind of verdict on weaker assumptions if necessary.)
So let’s suppose, on this basis, that D is better than C. B and C are equally good. I assume it follows that D is better than B.
Suppose that B were better than A. Since D is better than B, it would follow that D is better than A as well. But we know this can’t be so, if it’s neutral to make happy people, because D and A differ only in the existence of an extra person who has a life worth living. So the neutrality principle, together with the steps above, entails that B isn’t better than A. But it’s absurd to think that B isn’t better than A.
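Since the argument turns on a few small welfare sums and comparisons, here is a minimal sketch that simply recomputes them (my own illustration, not part of the original comment; representing non-existence, “#”, as None is an arbitrary encoding choice):

```python
# Illustrative check of the comparisons in the neutrality argument above.
# None stands in for non-existence (#).
A = [5, -2, None, None]
B = [5, None, 2, 2]
C = [5, 2, 2, None]
D = [5, -2, 7, None]

def total(outcome):
    """Total welfare of the people who exist in the outcome."""
    return sum(w for w in outcome if w is not None)

# Step 1: C and D contain exactly the same people (persons 1-3), and D has
# more total welfare, so fixed-population utilitarianism ranks D above C.
assert total(D) > total(C)  # 10 > 9

# Step 2: B and C have the same welfare levels, just borne by different
# people, so they are treated as equally good.
assert sorted(w for w in B if w is not None) == sorted(w for w in C if w is not None)

# Step 3: A and D agree on everyone except person 3, who exists (with a life
# worth living) only in D, so neutrality says this doesn't make D better than A.
assert A[:2] == D[:2] and A[2] is None and D[2] > 0

# Chaining the steps: D > C, C ~ B, so D > B; if B were better than A, then D
# would be better than A, contradicting neutrality. Hence B isn't better than A.
```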
Arguments like this make me feel pretty confident that the intuition of neutrality is mistaken.
Hmm, I might go back and forth on whether “the good” exists, as my subjective order over each set of outcomes (or set of outcome distributions). This example seems pretty compelling against it.
However, I’m first concerned with “good to someone” or “good from a particular perspective”. Then, ethics is about doing better by and managing tradeoffs between these perspectives, including as they change (e.g. with additional perspectives created through additional moral patients). This is what my sequence is about. Whether “the good” exists is second to all of this, and doesn’t seem very important.
Now, if you accept utilitarianism for a fixed population, you should think that D is better than C
If we imagine that world C already exists, then yeah, we should try to change C into D. (Similarly, if world D already exists, we’d want to prevent changes from D to C.)
So, if either of the two worlds already exists, D>C.
Where the way you’re setting up this argument turns controversial, though, is when you suggest that “D>C” is valid in some absolute sense, as opposed to just being valid (in virtue of how it better fulfills the preferences of existing people) under the stipulation of starting out in one of the worlds (that already contains all the relevant people).
Let’s think about the case where no one exists so far, where we’re the population planners for a new planet that we can shape into either C or D. (In that scenario, there’s no relevant difference between B and C, btw.) I’d argue that both options are now equally defensible because the interests of possible people are underdefined* and there are defensible personal stances on population ethics for justifying either.**
*The interests of possible people are underdefined not just because it’s open how many people we might create. In addition, it’s also open who we might create: Some human psychological profiles are such that when someone’s born into a happy/privileged life, they adopt a Buddhist stance towards existence and think of themselves as not having benefitted from being born. Other psychological profiles are such that people do think of themselves as grateful and lucky for having been born. (In fact, yet others even claim that they’d consider themselves lucky/grateful if their lives consisted of nothing but torture.) These varying intuitions towards existence can inspire people’s population-ethical leanings. But there’s no fact of the matter of “which intuitions are more true.” These are just different interpretations of the same sets of facts. There’s no uniquely correct way to approach population ethics.
**Namely, C is better on anti-natalist harm-reduction grounds (at least depending on how we interpret the negative numbers on the scale), whereas D is better on totalist grounds.
All of that was assuming that C or D are the only options. If we add a third alternative, say, “create no one,” the ranking between C and D (previously they were equally defensible) can change.
At this point, the moral realist proponents of an objective “theory of the good” might shriek in agony and think I have gone mad. But hear me out. It’s not crazy at all to think that choices depend on the alternatives we have available. If we also get the option “create no one,” then I’d say C becomes worse than the two other options, because there’s no approach to population ethics according to which C is optimal among the three options. My person-affecting stance on population ethics says that we’re free to do a bunch of things, but the one thing we cannot do is act with negligent disregard for the interests of potential people/beings.
Why? Essentially for similar reasons why common-sense morality says that struggling lower-class families are permitted to have children whom they raise under hardship with little means (assuming their lives are still worth living in expectation), but if a millionaire were to do the same to their child, they’d be an asshole. The fact that the millionaire has the option “give your child enough resources to have a high chance at high happiness” makes it worse if they then proceed to give the child hardly any resources and care at all. If you have the option to make your children really well off, but you decide not to do that, you’re not taking into consideration the interests of your child, which is bad. (Of course, if the millionaire donates all their money to effective causes and then raises a child in relative poverty, that’s acceptable again.)
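To make the menu-dependence in the “create no one” example above concrete, here is a toy sketch. It is entirely my own construction (the two stylized stances and the tie-break toward creating fewer people are illustrative choices, not anything the comment above commits to): an option counts as permissible just in case at least one stance ranks it weakly best among the options actually available.

```python
# Toy sketch of menu-dependent permissibility (illustrative construction only):
# an option is permissible iff at least one stylized population-ethical stance
# ranks it weakly best among the options actually on the menu.

C = [5, 2, 2, None]                # None = non-existence
D = [5, -2, 7, None]
EMPTY = [None, None, None, None]   # "create no one"

def existing(outcome):
    return [w for w in outcome if w is not None]

def totalist_score(outcome):
    # Higher total welfare is better.
    return sum(existing(outcome))

def harm_reduction_score(outcome):
    # Less total suffering is better; ties are broken (an illustrative choice)
    # toward creating fewer people.
    suffering = sum(w for w in existing(outcome) if w < 0)
    return (suffering, -len(existing(outcome)))

STANCES = [totalist_score, harm_reduction_score]

def permissible(option, menu):
    return any(
        all(stance(option) >= stance(alt) for alt in menu)
        for stance in STANCES
    )

# With only C and D on the menu, each is best on some stance, so both pass.
assert permissible(C, [C, D]) and permissible(D, [C, D])

# Adding "create no one" changes C's status: the empty option now beats C on
# the harm-reduction stance and D beats it on the totalist one, so no stance
# ranks C best any more.
assert not permissible(C, [C, D, EMPTY])
assert permissible(D, [C, D, EMPTY]) and permissible(EMPTY, [C, D, EMPTY])
```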
I think where the proponents of an objective theory of the good get confused is this idea that you keep track, on the same objective scoreboard, no matter whether it concerns existing people or potential people. But those are not commensurate perspectives. This whole idea of an “objective axiology/theory of the good” is dubious to me. Trying to squeeze it all under one umbrella also has pretty counterintuitive implications. As I wrote elsewhere:
There’s a tension between the beliefs “there’s an objective axiology” and “people are free to choose their life goals.”
Many effective altruists hesitate to say, “One of you must be wrong!” when one person cares greatly about living forever while the other doesn’t. By contrast, when two people disagree on population ethics “One of you must be wrong!” seems to be the standard (implicit) opinion. I think these two attitudes are in tension. To the degree people are confident that life goals are up to the individual to decide/pursue, I suggest they lean in on this belief. I expect that resolving the tension in that way – leaning in on the belief “people are free to choose their life goals;” giving up on “there’s an axiology that applies to everyone” – makes my framework more intuitive and gives a better sense of what the framework is for, what it’s trying to accomplish.
Here’s a framework for doing population ethics without an objective axiology. In this framework, person-affecting views seem quite intuitive because we can motivate them as follows:
“Preference utilitarianism for existing (and sure-to-exist) people/beings, but subject to also giving some consideration—in ways that aren’t highly demanding—to the (underdefined) interests of possible people/beings.”
That’s a common approach in situations where there are two possible things to value and care about, but someone primarily chooses one of them, as opposed to crafting a theory that unifies both of these things and stipulates tradeoffs for all situations. For instance, on the topic of “do you care about yourself or everyone?”, a self-oriented individual will choose to mostly benefit themselves, but they might feel like they should take low-demanding effective altruist actions.
So, when figuring out how to do “systematized altruism,” someone can decide that for their interpretation of the concept (note that there is no uniquely correct answer here!), “systematized altruism” wants to incorporate the slogan Michael mentioned earlier, “make people happy, not make happy people.” So, the person focuses on doing what existing people want. However, when it comes to possible people/beings, instead of going “anything goes,” they still want to follow low-effort ways of doing good according to the interests of possible people/beings, at least in the sense of “don’t take actions that violate what possible people/beings could agree on as an interest group.” (And it’s not the case that they’d all want to be born, because like I said, there are psychological profiles of possible people/beings that prefer no chance of being born if there’s a chance of suffering. But maybe you could argue that if you have the chance to create happy people at no cost to yourself and no risk of suffering, it would be bad not to take that chance, since at least some possible people/beings would put their weight behind that.)
I’ll get back to you on this, since I think this will take me longer to answer and can get pretty technical.
Worth pointing out that extinction by almost any avenue we’re discussing seriously would kill a lot of people who already exist.
It therefore seems better to prioritize our descendant moral patients conditional on our survival because there are far far more of them.
I think in practical terms this isn’t mutually exclusive with ensuring our survival. The immediate way to secure our survival, at least for the next decade or so, is a global moratorium on ASI. This also reduces s-risks from ASI, and keeps our options open for reducing human-caused s-risk (e.g. we can still avoid factory farming in space colonization).
That seems true, but I’m not convinced it’s the best way to reduce s-risks on the margin. See, for example, Vinding, 2024.
I’d also want to see a fuller analysis of ways it could backfire. For example, a pause might make multipolar scenarios more likely by giving more groups time to build AGI, which could increase the risks of conflict-based s-risks.
That wouldn’t really be a pause! A proper Pause (or moratorium) would include a global taboo on AGI research to the point where as few people would be doing it as are working on eugenics now (and they would be relatively easy to stop).
A pause would still give more groups more time to catch up on existing research and to build infrastructure for AGI (energy, datacenters), right? Then when the pause is lifted, we could have more players at the research frontier and ready to train frontier models.
Any realistic Pause would not be lifted absent a global consensus on proceeding with whatever risk remains.
Vinding says:
There is a key point on which I agree strongly with advocates for an AI pause: there is a massive moral urgency in ensuring that we do not end up with horrific AI-controlled outcomes. Too few people appreciate this insight, and even fewer seem to be deeply moved by it.
At the same time, I think there is a similarly massive urgency in ensuring that we do not end up with horrific human-controlled outcomes. And humanity’s current trajectory is unfortunately not all that reassuring with respect to either of these broad classes of risks …
The upshot for me is that there is a roughly equal moral urgency in avoiding each of these categories of worst-case risks
But he does not justify this equality. It seems highly likely to me that ASI-induced s-risks are on a much larger scale than human-induced ones (down to ASI being much more powerful than humanity), creating a (massive) asymmetry in favour of preventing ASI.