Perhaps at the core there is a recurring theme that goes something like this: clearly there is a strong incentive to ‘work on’ any imminent and unavoidable challenge whose resolution could require or result in “hard-to-reverse decisions with important and long-lasting consequences”. Current x-risks have been established as the ‘most obvious’ such challenges (in the sense that making the wrong decisions could result in extinction, which obviously counts as ‘hard-to-reverse’ and whose consequences are ‘long-lasting’). But can we think of any other such challenges, or any other category of such challenges? I don’t know of any that I’ve found anywhere near as convincing as the x-risk case, but I suppose that’s why the example project on case studies could be important?
Another thought I had: why might people who have been concerned about x-risk from misaligned AI pivot to asking about these other challenges? (I’m not saying Will counts as ‘pivoting’; I’m just asking the question in general.) One question I have in mind is: have we already reached a point of small (and diminishing) returns from putting today’s resources into the narrower goal of reducing x-risk from misaligned AI?