Is it possible that stable authoritarianism arising from AI would not count as an existential catastrophe? It might be a trajectory change leading to worse outcomes, but maybe it's not a "drastic curtailment" of our potential?