Thanks for this; I can see how that could be confusing language. I’ve changed “this would be enough to develop transformative AI” to “transformative AI would (likely) follow” and cut “But in fact” from the next bullet point. (I’ve only made these changes at the Cold Takes version; editing this version can cause bugs.)
I agree directionally with the points you make about “many transformative tasks” and “point of no return,” but I still think AI systems would have to be a great deal more capable than today’s (likely with a pretty high degree of generality, or at least far more sample-efficient learning than we see today) to get us to that point.
Update: I thought about it a bit more & asked this question & got some useful feedback, especially from tin482 and vladimir_nesov. I’m now confused about what people mean when they say current AI systems are much less sample-efficient than humans. On some interpretations, GPT-3 is already about as sample-efficient as humans. My guess is it’s something like: “Sure, GPT-3 can see a name or fact once in its dataset and then remember it later & integrate it with the rest of its knowledge. But that’s because it’s part of the general skill/task of predicting text. For new skills/tasks, GPT-3 would need huge amounts of fine-tuning data to perform acceptably.”
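To make the “sees it once, uses it later” part concrete, here’s a rough sketch of the in-context side of that contrast: the new “task” is specified entirely by a few examples in the prompt, with no gradient updates. (This uses GPT-2 via Hugging Face transformers as a small stand-in for GPT-3, and a toy word-reversal task; a model this small will mostly fail at the task, so treat it as an illustration of the mechanism, not the capability.)

```python
# Illustrative only: GPT-2 as a stand-in for GPT-3.
# Pretraining consumed an enormous number of tokens (sample-inefficient in total),
# yet a new toy "task" is specified here by three in-context examples at
# inference time (sample-efficient on the margin). No weights are updated.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# A few-shot prompt: three examples of a made-up reversal task, then a query.
prompt = (
    "Reverse the word.\n"
    "Input: cat -> Output: tac\n"
    "Input: dog -> Output: god\n"
    "Input: bird -> Output: drib\n"
    "Input: fish -> Output:"
)

out = generator(prompt, max_new_tokens=5, do_sample=False)
print(out[0]["generated_text"])
# Whatever comes out, the "learning" here is purely in-context:
# no fine-tuning data, no gradient steps.
```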
Surely a big part of the resolution is that GPT-3 is sample-inefficient in total, but sample-efficient on the margin?
Excellent, thanks!
The sample-efficient learning thing is an interesting crux. I tentatively agree with you that it seems hard for AIs as sample-inefficient as today’s to be dangerous. However… on my to-do list is to interrogate that. In my “median future” story, for example, we have chatbots that are talking to millions of people every day and online-learning from those interactions. Maybe they can make up in quantity what they lack in quality, so to speak: keeping up with world affairs and reacting to recent developments by seeing millions of data points about them, rather than by seeing one data point and being sample-efficient. Idk.
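To gesture at what that might look like mechanically, here’s a rough sketch of an online-learning loop over interaction logs. The model choice, the get_recent_interactions() helper, and the toy data are all hypothetical placeholders; this is not a claim about how such a system would actually be built.

```python
# A minimal sketch of the "quantity over sample efficiency" idea: a deployed
# model is periodically fine-tuned on recent interaction logs, rather than
# learning each new fact from a single exposure.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.train()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

def get_recent_interactions():
    # Placeholder: in the scenario above this would be millions of fresh chat
    # transcripts mentioning recent events, not three toy strings.
    return [
        "User: Who won the election? Bot: ...",
        "User: What happened at the summit? Bot: ...",
        "User: Is the bridge still closed? Bot: ...",
    ]

for step in range(3):  # in deployment this loop would run continuously
    batch = tokenizer(get_recent_interactions(), return_tensors="pt",
                      padding=True, truncation=True)
    outputs = model(**batch, labels=batch["input_ids"])  # standard LM loss
    outputs.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    # Each individual update is noisy and sample-inefficient; the hope (or
    # worry) is that sheer volume of data keeps the model current anyway.
```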