> For an intervention to be a longtermist priority, there needs to be some kind of concrete story for how it improves the long-term future.
I disagree with this. With existential risk from unaligned AI, I don’t think anyone has ever told a very clear story about how AI will actually get misaligned, get loose, and kill everyone.
When I read the passage you quoted, I thought of, e.g., Critch's description of RAAPs and Christiano's "What Failure Looks Like," both of which seem pretty detailed to me without necessarily fitting the "AI gets misaligned, gets loose, and kills everyone" meme. Both Critch and Christiano seem to be explicitly pushing back against considering only that meme, and Critch in particular thinks work in this area is ~neglected (as of 2021; I haven't kept up with goings-on since). I suppose Gwern's writeup comes closest to your description, and I can't imagine it being more concrete. Curious to hear if you have a different reaction.