When you mentioned “I estimated a very rough 50% chance of AGI within 20 years, and 30-40% chance that it would be using ‘essentially current techniques’”, I took it as prosaic AGI too, but you might mean something else.
Oh yeah, that sounds correct to me. I think the issue was that I thought you meant something different from “prosaic AGI” when you were talking about “short term AI capabilities”. I do think it is very impactful to work on prosaic AGI alignment; that’s what I work on.
Your rephrasing sounds good to me, and I think you could make it even stronger: it is true that many researchers, including me, endorse working on prosaic AI alignment.
That’s great! Thanks again for the feedback.