How likely do you think it is that we will solve the problem of AI alignment before an existential catastrophe, and why?