Extinction risk reduction and moral circle expansion: Speculating suspicious convergence

Greg Lewis opens his thought-provoking post Beware surprising and suspicious convergence with the following statements:

Oliver: … Thus we see that donating to the opera is the best way of promoting the arts.
Eleanor: Okay, but I’m principally interested in improving human welfare.
Oliver: Oh! Well I think it is also the case that donating to the opera is best for improving human welfare too.
Generally, what is best for one thing is usually not the best for something else, and thus Oliver’s claim that donations to opera are best for the arts and human welfare is surprising. We may suspect bias: that Oliver’s claim that the Opera is best for the human welfare is primarily motivated by his enthusiasm for opera and desire to find reasons in favour, rather than a cooler, more objective search for what is really best for human welfare.
I think this is a valid point. Furthermore, it’s the main reason the contents of the present post probably shouldn’t change your beliefs very much.
But if I were to speculate about some surprising and suspicious convergence between what’s best for two quite different goals, it might go a little something like this...
Background
Moral circle expansion (MCE) essentially refers to influencing people to extend moral concern to additional types of entities, such as nonhuman animals. This is plausibly among the most valuable categories of interventions from a (near-term) animal welfare perspective. Some (e.g., Reese) have argued that MCE is also among the most valuable categories of interventions from a longtermist perspective, and perhaps more valuable than extinction risk reduction.
In response, some have argued that this is a claim of surprising and suspicious convergence. Some further argue that this is made especially suspicious by the fact that some of the claimants were already interested in MCE or (near-term) animal welfare before they learned of or became interested in longtermism.[1]
Personally, I see merit both in those skeptical arguments and in further work on MCE.
But here I’d like to speculate about some fresh convergences: some surprising and suspicious arguments for why, if what you really care about is “moral circle expansion”, you might want to do work that looks like “extinction risk reduction”, or vice versa. For example, let’s say you’re mostly concerned about the quality of the long-term future (rather than whether humanity survives to experience that future), and you see the size of our moral circles as key to that. If so, the first two of these arguments might push in favour of you working “directly” on extinction risk reduction for the sake of its “indirect” MCE benefits.[2]
I think that these arguments should play a smaller role in decision-making than various other considerations (e.g., population ethics, the likely quality and size of the future, personal fit; see also Crucial questions for longtermists). But I think that these arguments may deserve some attention.
(Suspicious) arguments for working on extinction risk reduction
Argument 1: Work on extinction risks is a concrete project primarily premised on, and making salient, the moral value of future generations.[3] The general public typically think and care relatively little about future generations. Additionally, people discussing extinction risk reduction often highlight the importance of ensuring the existence and thriving of not only future humans, but also whatever sentient beings we end up as (e.g., digital minds, a species we evolve into). It seems plausible (though speculative) that this expands people’s moral circles to include the beings focused on in these discussions (future humans, digital minds, etc.). It may also expand people’s moral circles more generally.
At least at first glance, this argument seems about as plausible as the argument that it makes sense to work on present-day animal welfare in order to secure broader, long-term moral circle expansion (e.g., to include digital minds). And that argument is common among proponents of MCE (e.g., the Sentience Institute; see also).[4]
This also seems similar to the occasionally made claim that work on, or concern for, climate change has increased thought about and concern for future generations more generally (see e.g. Lewis).
Argument 2: Many extinction risk reduction activities could also happen to reduce the chance that humanity (or its descendants) “locks in” a set of values or goals before we undertake something like a “long reflection”. That could mean that there’s more time in which moral circles can “naturally” expand, or in which people can actively push for MCE. That could in turn mean that, in the long run, our moral circles will end up closer to the appropriate size. (See also.)
An example of an extinction risk reduction activity that might fit this bill is promoting cautious and safe AI development.
This is quite similar to standard arguments that existential or extinction risk reduction is robustly valuable because it could help us keep our options open and allow us to act on whatever we later decide or realise is valuable. But this speculative argument is not quite identical to those arguments; it adds something on top of them. One reason is that there could be cases of value/goal “lock-in” that are either not bad enough or not irreversible enough to count as existential catastrophes, but which still leave the future worse in expectation. Many extinction risk reduction efforts also happen to reduce the risk of such cases, providing more time for MCE.
(Suspicious) arguments for working on MCE
Argument A: MCE could expand concern to future generations, digital minds, future nonhuman animals on Earth or on other planets (see also), etc. This could increase the apparent stakes involved in extinction risk reduction, because it could make people realise that we’re able to create even more value than they thought (e.g., that the future could be full of huge numbers of beings who matter). As a result, this could increase the attention and resources devoted to extinction risk reduction.
This overlaps considerably with the idea that promoting longtermism is a good “indirect” or “meta” strategy for reducing extinction risk. The key distinction is that promoting longtermism may tend to expand concern only to future generations of humans, rather than also to other relevant groups (e.g., digital minds). (That said, there’s no reason longtermism has to be human-centric.)
Argument B: Let’s assume that the moral circles of most people (including various “key people”, such as AI designers) are currently “smaller” than they “should” be. If so, MCE might increase the expected value of the future conditional on us avoiding extinction, because it may reduce the chances of other types of existential catastrophes (such as unrecoverable dystopias), or of futures that are suboptimal in milder or theoretically reversible ways.
If MCE increases the expected value of the future conditional on us avoiding extinction, then MCE might serve as a “complementary good” to extinction risk reduction: it might make each unit of extinction risk reduction more valuable, and thereby increase demand for extinction risk reduction. This could perhaps increase the attention and resources devoted to extinction risk reduction.
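To see why “complementary good” is apt, here is a minimal expected-value sketch (a toy model of my own, with illustrative variables p and V; it isn’t drawn from any of the sources discussed above):

```latex
\documentclass{article}
\usepackage{amsmath, amssymb}
\begin{document}
% Toy model (an illustrative assumption, not a claim from this post):
%   p = probability that humanity avoids extinction
%   V = expected value of the future, conditional on avoiding extinction
\[
  \mathbb{E}[\text{value of the future}] = p \cdot V
\]
% The marginal value of a unit of extinction risk reduction (raising p) is:
\[
  \frac{\partial (p \cdot V)}{\partial p} = V
\]
% So anything that raises V (such as MCE, if Argument B is right) also raises
% the value of each marginal unit of extinction risk reduction, which is the
% sense in which the two behave as complementary goods.
\end{document}
```

Of course, this two-variable sketch ignores interactions (e.g., MCE efforts might themselves affect p), so it only illustrates the direction of the effect, not its size.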
Personal conclusion
Personally, these arguments update me slightly towards placing more value on both extinction risk reduction and MCE, because they weakly suggest additional benefits of both categories of intervention. This effect is slightly stronger for extinction risk reduction, as Argument 1 seems to me slightly less speculative and suspicious than the other three arguments I gave.
But these are small updates, because:
various other considerations (e.g., population ethics) seem more important
I suspect I could come up with arguments of similar style and strength for various other categories of interventions, if I made an active effort to do so.
In any case, I currently favour existential risk reduction over either extinction risk reduction or MCE.
Some things this post didn’t cover
Various other considerations that could inform choices between categories of longtermist interventions
Which specific extinction risk reduction interventions are best, both in general and in relation to their indirect MCE benefits?
Which specific MCE interventions are best, both in general and in relation to their indirect extinction risk reduction benefits?
E.g., perhaps explicit advocacy of longtermism or consideration of future generations benefits extinction risk reduction more than corporate campaigns to improve animal welfare (see also).
For thoughts and links relevant to those questions, see Crucial questions for longtermists.
I’m grateful to Justin Shovelain for comments and suggestions on a draft of this post. This does not imply his endorsement of all of this post’s claims.
This post does not necessarily represent the views of any of my employers.
Footnotes

[1] I’m fairly confident I’ve seen or heard these sorts of arguments several times, though I can’t recall where.
In Lewis’ post, he makes related points (though without referring explicitly to MCE). For example, he writes:
In sketch, one first points to some benefits the prior commitment has by the lights of the new consideration (e.g. promoting animal welfare promotes antispeciesism, which is likely to make the far future trajectory go better), and second remarks about how speculative searching directly on the new consideration is (e.g. it is very hard to work out what we can do now which will benefit the far future).
That the argument tends to end here is suggestive of motivated stopping. For although the object level benefits of (say) global poverty are not speculative, their putative flow-through benefits on the far future are speculative.
And Jacy Reese makes related points when discussing reasons why certain people may be biased towards MCE.
I don’t actually know what proportion of the people claiming that MCE should be a top priority from a longtermist perspective already thought that MCE (or near-term animal welfare) should be a top altruistic priority before they learned of or became interested in longtermism.
[2] Note that I’m talking about extinction risk reduction, not existential risk reduction. This is partly because it’s easier to distinguish MCE work from extinction risk reduction work than from existential risk reduction work. This, in turn, is because some existential catastrophes could follow fairly directly from the failure of humanity’s moral circle (or particular people’s moral circles) to encompass entities that really should’ve been encompassed (see also Reese).
[3] That said, extinction risk reduction, like existential risk reduction, can also be motivated by consideration of the past, the present, virtue, and cosmic significance (The Precipice, Chapter 2). See also the person-affecting value of existential risk reduction.
[4] Just in case this isn’t clear, this genuinely isn’t meant as a veiled critique of proponents of MCE. And I don’t see the argument I’m making as a compelling reason why people who care about MCE should work on extinction risk reduction; it’s just one possible consideration.