However, such research on short-term AI capabilities is potentially impactful in the long term too, according to some AI researchers, such as Paul Christiano, Ian Goodfellow, and Rohin Shah.
Huh, I don’t see where I said anything that implied that? (I just reread the summary that you linked.)
I’m not entirely sure what you mean by “short term AI capabilities”. The context suggests you mean “AI-related problems that will arise soon that aren’t about x-risk”. If so, under a longtermist perspective, I think that work addressing such problems is better than nothing, but I expect that focusing on x-risk in particular will lead to orders of magnitude more (expected) impact.
(I also don’t think the post you linked for Paul implies the statement you made either, unless I’m misunderstanding something.)
Regarding what I meant by “short term AI capabilities”: I was referring to prosaic AGI, i.e. potentially powerful AI systems that use current techniques rather than hypothetical new ideas about how intelligence works. When you mentioned “I estimated a very rough 50% chance of AGI within 20 years, and 30-40% chance that it would be using ‘essentially current techniques’”, I took it as prosaic AGI too, but you might mean something else.
I’ve reread all the write-ups, and you’re right that they don’t imply that “research on short term AI capabilities is potentially impactful in the long term”. I really jumped the gun there. Thanks for letting me know!
I’ve rephrased the problematic part to the following:
“Singapore’s AI research is focused more on current techniques. If you think we need new ideas about how intelligence works to tackle AI alignment, then Singapore is not a good country for that. However, if you think prosaic AGI [link to Paul’s Medium article] is a strong possibility, then working on AI alignment research in Singapore might be good.”
If you feel like this rephrasing is still problematic, please do let me know. I don’t have a strong background in AI alignment research, so I might have misunderstood some parts of it.
When you mentioned “I estimated a very rough 50% chance of AGI within 20 years, and 30-40% chance that it would be using ‘essentially current techniques’”, I took it as prosaic AGI too, but you might mean something else.
Oh yeah, that sounds correct to me. I think the issue was that I thought you meant something different from “prosaic AGI” when you were talking about “short term AI capabilities”. I do think it is very impactful to work on prosaic AGI alignment; that’s what I work on.
Your rephrasing sounds good to me. I think you can make it even stronger: it is true that many researchers, including me, endorse working on prosaic AI alignment.
That’s great! Thanks again for the feedback.