I’m thrilled about this post. During my first two or three years of studying math/CS and thinking about AGI, my primary concern was the rights and liberties of baby agents (though I wasn’t giving suffering nearly adequate thought). Over the years I became more of an orthodox x-risk reducer, and while the process has been full of nutritious exercises, I fully admit that becoming orthodox is a good way to win colleagues, avoid getting shrugged off as a crank at parties, etc., and this may have played a small role: if not motivated reasoning, then at least humble deference to people who seem to be thinking more clearly than me.
I think this area is sufficiently undertheorized and neglected that the following is only hypothetical, but it could become important: how is one to trade off between existential safety (for humans) and suffering risks (for all minds)?
Value is complex and fragile. There are numerous reasons to be more careful than kneejerk cosmopolitanism, and if one’s intuitions say “for all minds, of course!” it’s important to think through what steps one would have to take to become someone who thinks safeguarding humanity is more important than ensuring good outcomes for creatures in other substrates. The best writing on this, to my knowledge, is Eliezer Yudkowsky’s old Value Theory sequence, and to some extent Fun Theory. While neither is 100% satisfying, I don’t think one go-to sequence is the answer, as a lot of this stuff should be left as an exercise for the reader.
Is anyone worried that x-risk and s-risk signal a future of two opposed factions of EA? That is to say, what are the odds that there’s no way for humanity-preservers and suffering-reducers to get along? You can easily imagine disagreement about how to trade off research resources between human existential safety and artificial welfare, but what if we had to reason about deployment? Do we deploy an AI that’s 90% safe against some alien paperclipping outcome but achieves only a 30% reduction in artificial suffering, or one that’s 75% safe against paperclipping but achieves a 70% reduction in artificial suffering?
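To make the deployment question concrete, here’s a toy calculation (the percentages are from my hypothetical above, and the linear weighting is purely illustrative): suppose a decision-maker scores a system as $w \cdot (\text{safety}) + (1 - w) \cdot (\text{suffering reduction})$, where $w$ is the weight placed on safety against paperclipping. Then the first system is preferred exactly when

$$0.90w + 0.30(1 - w) > 0.75w + 0.70(1 - w) \iff 0.55w > 0.40 \iff w > \tfrac{0.40}{0.55} \approx 0.73.$$

So anyone whose weight on safety exceeds roughly 0.73 deploys the first system, and anyone below it deploys the second; the factional worry is precisely that the two camps’ weights could sit on opposite sides of a threshold like this.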
If we’re lucky, there will be a galaxy-brained research agenda or program, with some holes or gaps in the theory or implementation that allow and even encourage coalitioning between humanity-preservers and suffering-reducers. I don’t think we’ll be this lucky in the limiting case, where one humanity-preserver and one suffering-reducer are each at the penultimate stages of their goals. However, we shouldn’t be surprised if there is some overlap; the cooperative AI agenda comes to mind.
I find myself shocked at point #2, at the inadequacy of the current theory of these tradeoffs. Is it premature to worry about that before the AS movement has even published a detailed agenda or proposal for how to allocate research effort, grounded in today’s AI field? Much theorization is needed to even get to that point, but it might be wise to think ahead.
I look forward to reading the preprint this week. Thanks!
Hey, glad you liked the post! I don’t really see a tradeoff between extinction risk reduction and moral circle expansion, except insofar as we have limited time and resources to make progress on each. Maybe I’m missing something?
When it comes to limited time and resources, I’m not too worried about that at this stage. My guess is that by reaching out to new (academic) audiences, we can actually increase the total resources and community capital dedicated to longtermist topics in general. Some individuals might have tough decisions to face about where they can have the most positive impact, but that’s just in the nature of there being lots of important problems we could plausibly work on.
On the more general category of s-risks vs extinction risks, people focused on s-risks seem pretty unanimous in advocating cooperation between these groups. E.g. see Tobias Baumann’s “Common ground for longtermists” and CLR’s publications on “Cooperation & Decision Theory”. I’ve seen less about this from people focused on extinction risks, but I might just not have been paying enough attention.
Thanks for this post and this comment.
I agree that some work on extinction risk reduction may actually boost work on moral circle expansion, and vice versa. I also think there are some possible mechanisms for that beyond those you mentioned. I previously discussed similar points in my post Extinction risk reduction and moral circle expansion: Speculating suspicious convergence.
(Though I do think there could also be some tensions between these two areas of work beyond just the fact that each draws on similar scarce resources.)
It seems to me that your comment kind of implies that people who focus on reducing extinction risk and people who focus on reducing s-risk are mainly divided by moral views. (Maybe that’s just me misreading you, though.) But I think empirical views can also be very relevant.
For example, if someone who leans towards suffering-focused ethics became convinced that s-risks are less likely, smaller in expected scale, or harder to reduce in likelihood or scale than they’d thought, that should probably update them somewhat away from prioritising s-risk reduction, leaving more room for prioritising extinction risk reduction. Likewise, if someone who was prioritising extinction risk reduction came to believe extinction was less likely, or its likelihood harder to change, than they’d thought, that should update them somewhat away from prioritising extinction risk reduction.
So one way to address the questions, tradeoffs, and potential divisions you mention is simply to engage in further research and debate on empirical questions relevant to the importance, tractability, and neglectedness of extinction risk reduction, s-risk reduction, and other potential longtermist priorities.
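As a toy illustration of how that could work (the structure is a standard importance-tractability-neglectedness-style score, and the factor of two below is made up): if each cause’s priority is scored roughly as

$$\text{priority} \propto P(\text{risk}) \times \text{scale} \times \text{tractability},$$

then someone who halves their estimate of $P(\text{s-risk})$, with moral views and all other estimates held fixed, halves their score for s-risk reduction relative to extinction risk reduction. Purely empirical debate can move prioritisation that way, without anyone changing their values.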
The following post also contains some relevant questions and links to relevant sources: Crucial questions for longtermists.
> how is one to trade off between existential safety (for humans) and suffering risks (for all minds) [...] what are the odds that there’s no way for humanity-preservers and suffering-reducers to get along?

It seems that what you have in mind is tradeoffs between extinction risk reduction and suffering risk reduction. I say this because existential risk itself includes a substantial portion of possible suffering risks, and isn’t just about preserving humanity. (See Venn diagrams of existential, global, and suffering catastrophes.)
I also think it would be best to separate out the question of which types of beings to focus on (e.g., humans, nonhuman animals, artificial sentient beings…) from the question of how much to focus on reducing suffering in those beings vs achieving other possible moral goals (e.g., increasing happiness, increasing freedom, creating art).
(There are also many other distinctions one could make, such as between affecting the lives of beings that already exist vs changing whether beings come to exist in future.)