I would like someone to write a post expanding on X-risks to all life v. to humans. Despite the importance of this consideration, it seems to have been almost completely neglected in EA discussion.
If I were to write on this, I’d reframe the issue somewhat differently than the author does in that post. Instead of a dichotomy between two types of risks, one could see it as a gradation of risks that push things back an increasing number of possible great filters. Risks to all life and risks to humans would then be two specific instances of this more general phenomenon.
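The gradation framing can be made concrete with a toy model. All numbers below are hypothetical placeholders, not estimates from this thread: the idea is just that a catastrophe can be scored by how many candidate great filters it would force life on Earth to re-pass, with the chance of recovery shrinking as that number grows.

```python
# Toy model (all numbers hypothetical): score each risk by how many
# candidate "great filters" a catastrophe of that type would force
# life on Earth to pass through again.
filters_reset = {
    "civilisational_collapse": 1,   # humans survive; civilisation must re-form
    "human_extinction": 2,          # intelligent life must re-evolve
    "loss_of_all_complex_life": 5,  # multicellularity etc. must re-evolve
    "sterilised_planet": 8,         # abiogenesis itself must recur
}

p_repass = 0.3  # hypothetical probability of re-passing any one filter

# If filters are re-passed independently, the chance of eventual
# recovery decays geometrically with the number of filters reset.
for risk, n in filters_reset.items():
    print(f"{risk}: P(recovery) ~ {p_repass ** n:.5f}")
```

On this sketch, "risks to humans" and "risks to all life" are just two points on the same curve, which is the reframing suggested above.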
Nice point, Pablo! I did not know about the post you linked, but I had noted it was not mentioned (at least not very clearly) in Hilary Greaves’ working paper Concepts of existential catastrophe[1] (at least in the version of September 2023).
Despite the importance of this consideration, it seems to have been almost completely neglected in EA discussion.
I guess most people sympathetic to existential risk reduction think the extinction risk from AI is much higher than those from other risks (as I do). In addition, existential risk as a fraction of extinction risk is arguably way higher for AI than other risks, so the consideration you mentioned will tend to make AI existential risk even more pressing? If so, people may be more interested in either tackling AI risk, or assessing its interactions with other risks.
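The claim above can be illustrated with a toy calculation (all probabilities are hypothetical placeholders, not estimates from this thread): if extinction caused by AI is both more likely and more often unrecoverable than extinction from other causes, the gap in existential risk is wider than the gap in extinction risk alone.

```python
# Toy numbers (hypothetical) illustrating the claim that the
# existential share of extinction risk may differ across causes:
# an AI takeover that replaces humans may leave little room for
# recovery, whereas after an asteroid strike other life survives
# and intelligence might re-evolve.
risks = {
    #           (P(human extinction), P(existential | extinction))
    "AI":       (0.05, 0.9),
    "asteroid": (1e-4, 0.3),
    "pandemic": (0.01, 0.2),
}

for cause, (p_ext, frac_existential) in risks.items():
    p_xrisk = p_ext * frac_existential
    print(f"{cause}: existential risk ~ {p_xrisk:.6f}")
```

With these placeholder numbers, AI dominates the existential-risk column even more strongly than the extinction-risk column, which is the sense in which the consideration makes AI existential risk "even more pressing".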
A related, seemingly underexplored question is determining under which conditions human disempowerment (including extinction) would be bad from an impartial perspective. Humans have arguably played a role in the extinction of many species, including maybe some of the genus Homo (there are 13!), but that was not an existential catastrophe, given humans are thought to be better steerers of the future than the species they replaced. The same might apply to AI under some conditions. Matthew Barnett has a quick take somewhat related to this. Here is the first part:
I’m curious why there hasn’t been more work exploring a pro-AI or pro-AI-acceleration position from an effective altruist perspective. Some points:
Unlike existential risk from other sources (e.g. an asteroid), AI x-risk is unique because humans would be replaced by other beings, rather than completely dying out. This means you can’t simply apply a naive argument that AI threatens total extinction of value to make the case that AI safety is astronomically important, in the sense that you can for other x-risks. You generally need additional assumptions.
Total utilitarianism is generally seen as non-speciesist, and therefore has no intrinsic preference for human values over unaligned AI values. If AIs are conscious, there don’t appear to be strong prima facie reasons for preferring humans to AIs under hedonistic utilitarianism. Under preference utilitarianism, it doesn’t necessarily matter whether AIs are conscious.
Total utilitarianism generally recommends large population sizes. Accelerating AI can be modeled as a kind of “population accelerationism”. Extremely large AI populations could be preferable under utilitarianism compared to small human populations, even those with high per-capita incomes. Indeed, human populations have recently stagnated via low population growth rates, and AI promises to lift this bottleneck.
Therefore, AI accelerationism seems straightforwardly recommended by total utilitarianism under some plausible theories.
I guess most people sympathetic to existential risk reduction think the extinction risk from AI is much higher than those from other risks (as I do). In addition, existential risk as a fraction of extinction risk is arguably way higher for AI than other risks, so the consideration you mentioned will tend to make AI existential risk even more pressing? If so, people may be more interested in either tackling AI risk, or assessing its interactions with other risks.
Yes, this seems right.
As a semi-tangential observation: your comment made me better appreciate an ambiguity in the concept of importance. When I said that this was an important consideration, I meant that it could cause us to significantly revise our estimates of impact. But by ‘important consideration’ one could also mean a consideration that could cause us to significantly alter our priorities.[1] “X-risks to all life v. to humans” may be important in the first sense but not in the second sense.
Nice point, Pablo! I did not know about the post you linked, but I had noted it was not mentioned (at least not very clearly) in Hilary Greaves’ working paper Concepts of existential catastrophe[1] (at least in the version of September 2023).
So I sent her an email a few days ago about this.
As a semi-tangential observation: your comment made me better appreciate an ambiguity in the concept of importance. When I said that this was an important consideration, I meant that it could cause us to significantly revise our estimates of impact. But by ‘important consideration’ one could also mean a consideration that could cause us to significantly alter our priorities.[1] “X-risks to all life v. to humans” may be important in the first sense but not in the second sense.
Perhaps one could distinguish between ‘axiological importance’ and ‘deontic importance’ to disambiguate these two notions.
I wrote this post asking what success for sentience looks like. There’s a good chance we humans are just another stepping stone on the path toward an even higher form of intelligence and sentience.