I still feel mostly in agreement with those quotes (though less so than with the ones in the original post).
On the first, I mostly agree that if you make an AI that’s better at coding, it will be better at coding but not necessarily anything else. The one part I disagree with is the conclusion that this means “no singularity”: I don’t think it really affects the argument for a singularity, which in my view rests primarily on the positive feedback loop of more ideas → more output → more “people” → more ideas. I also don’t think the singularity argument or the recursive self-improvement argument is that important for AI risk, as long as you believe that AI systems will become significantly more capable than humanity (see also here).
On the second, it seems very plausible that your first coding AIs would not be very good at manipulating people. But a coding AI doesn’t necessarily need to manipulate people; it could instead hack into other servers that are not being monitored as heavily and run copies of itself there, and those copies could then spend time learning and planning their next moves. (This requires some knowledge / understanding of humans, such as knowing that they would not like it if the AI achieved its goals and that they are monitoring its server, but it doesn’t seem to require anywhere near a human-level understanding of how to manipulate humans.)