If we don’t solve the problem of AI alignment in time, is it almost certain (i.e. 90%+ chance) that we face an existential catastrophe?