Regarding what I meant by “short term AI capabilities”, I was referring to prosaic AGI: potentially powerful AI systems that use current techniques rather than hypothetical new ideas about how intelligence works. When you mentioned “I estimated a very rough 50% chance of AGI within 20 years, and 30-40% chance that it would be using ‘essentially current techniques’”, I took it as prosaic AGI too, but you might mean something else.
I’ve reread all the write-ups, and you’re right that they don’t imply that “research on short term AI capabilities is potentially impactful in the long term”. I really jumped the gun there. Thanks for letting me know!
I’ve rephrased the problematic part to the following:
“Singapore’s AI research is focused more on current techniques. If you think we need new ideas about how intelligence works to tackle AI alignment, then Singapore is not a good country for that. However, if you think prosaic AGI [link to Paul’s Medium article] is a strong possibility, then working on AI alignment research in Singapore might be good.”
If you feel like this rephrasing is still problematic, please do let me know. I don’t have a strong background in AI alignment research, so I might have misunderstood some parts of it.
When you mentioned “I estimated a very rough 50% chance of AGI within 20 years, and 30-40% chance that it would be using ‘essentially current techniques’”, I took it as prosaic AGI too, but you might mean something else.
Oh yeah, that sounds correct to me. I think the issue was that I thought you meant something different from “prosaic AGI” when you were talking about “short term AI capabilities”. I do think it is very impactful to work on prosaic AGI alignment; that’s what I work on.
Your rephrasing sounds good to me. I think you could even make it stronger: many researchers, including me, endorse working on prosaic AI alignment.
That’s great! Thanks again for the feedback.