Thanks for sharing, Garrison! For balance, readers may want to check Matthew Barnett's quick take on pro-AI acceleration. Here is how it opens:
I'm curious why there hasn't been more work exploring a pro-AI or pro-AI-acceleration position from an effective altruist perspective. Some points:
Unlike existential risks from other sources (e.g., an asteroid), AI x-risk is unique in that humans would be replaced by other beings rather than dying out entirely. This means you can't simply apply the naive argument that AI threatens the total extinction of value to make the case that AI safety is astronomically important, the way you can for other x-risks. You generally need additional assumptions.
Total utilitarianism is generally seen as non-speciesist, and therefore has no intrinsic preference for human values over unaligned AI values. If AIs are conscious, there don't appear to be strong prima facie reasons for preferring humans to AIs under hedonistic utilitarianism. Under preference utilitarianism, it doesn't necessarily matter whether AIs are conscious.
Total utilitarianism generally recommends large population sizes. Accelerating AI can be modeled as a kind of "population accelerationism". Extremely large AI populations could be preferable under utilitarianism to small human populations, even those with high per-capita incomes (a toy calculation after these points makes this concrete). Indeed, human populations have recently stagnated due to low population growth rates, and AI promises to lift this bottleneck.
Therefore, AI accelerationism seems straightforwardly recommended by total utilitarianism under some plausible assumptions.
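To make the population-scaling point concrete, here is a toy version of the arithmetic (all figures hypothetical, and "welfare" is a stand-in for whatever quantity the total utilitarian sums over). If total value is approximately population times average welfare,

$$U \approx N \cdot \bar{w},$$

then $10^{10}$ humans at average welfare $10$ give $U \approx 10^{11}$, while $10^{15}$ AIs at average welfare $1$ give $U \approx 10^{15}$: the far larger population dominates despite much lower per-capita welfare.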