Nice point, Pablo! I did not know about the post you linked, but I had noted it was not mentioned (at least not very clearly) in Hilary Greaves' working paper Concepts of existential catastrophe[1] (at least in the version of September 2023), so I sent her an email a few days ago about this.
Despite the importance of this consideration, it seems to have been almost completely neglected in EA discussion.
I guess most people sympathetic to existential risk reduction think the extinction risk from AI is much higher than that from other risks (as I do). In addition, existential risk as a fraction of extinction risk is arguably much higher for AI than for other risks, so the consideration you mentioned would tend to make AI existential risk even more pressing. If so, people may be more interested in either tackling AI risk or assessing its interactions with other risks.
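To spell out that comparison, existential risk can be decomposed as (a minimal sketch; the framing is mine, not a calculation from the discussion above):

$$P(\text{existential catastrophe}) = P(\text{extinction}) \times P(\text{existential catastrophe} \mid \text{extinction}).$$

If both factors are higher for AI than for other risks (the second plausibly because an unaligned AI could persist and foreclose recovery, whereas after, say, an asteroid strike, intelligent life might eventually re-evolve), the existential risk from AI exceeds that from other risks by the product of the two ratios.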
A related and seemingly underexplored question is under which conditions human disempowerment (including extinction) would be bad from an impartial perspective. Humans have arguably played a role in the extinction of many species, including perhaps some species of the genus Homo (there are 13!), but that was not an existential catastrophe, given humans are thought to be better steerers of the future. The same might apply to AI under some conditions. Matthew Barnett has a quick take somewhat related to this. Here is the first part:
I'm curious why there hasn't been more work exploring a pro-AI or pro-AI-acceleration position from an effective altruist perspective. Some points:
Unlike existential risk from other sources (e.g. an asteroid), AI x-risk is unique because humans would be replaced by other beings, rather than completely dying out. This means you can't simply apply a naive argument that AI threatens total extinction of value to make the case that AI safety is astronomically important, in the sense that you can for other x-risks. You generally need additional assumptions.
Total utilitarianism is generally seen as non-speciesist, and therefore has no intrinsic preference for human values over unaligned AI values. If AIs are conscious, there don't appear to be strong prima facie reasons for preferring humans to AIs under hedonistic utilitarianism. Under preference utilitarianism, it doesn't necessarily matter whether AIs are conscious.
Total utilitarianism generally recommends large population sizes. Accelerating AI can be modeled as a kind of "population accelerationism". Extremely large AI populations could be preferable under utilitarianism compared to small human populations, even those with high per-capita incomes. Indeed, human populations have recently stagnated via low population growth rates, and AI promises to lift this bottleneck.
Therefore, AI accelerationism seems straightforwardly recommended by total utilitarianism under some plausible theories.
I guess most people sympathetic to existential risk reduction think the extinction risk from AI is much higher than that from other risks (as I do). In addition, existential risk as a fraction of extinction risk is arguably much higher for AI than for other risks, so the consideration you mentioned would tend to make AI existential risk even more pressing. If so, people may be more interested in either tackling AI risk or assessing its interactions with other risks.
Yes, this seems right.
As a semi-tangential observation: your comment made me better appreciate an ambiguity in the concept of importance. When I said that this was an important consideration, I meant that it could cause us to significantly revise our estimates of impact. But by "important consideration" one could also mean a consideration that could cause us to significantly alter our priorities.[1] "X-risks to all life v. to humans" may be important in the first sense but not in the second sense.
Perhaps one could distinguish between "axiological importance" and "deontic importance" to disambiguate these two notions.