I agree that the difficult part is getting to general intelligence, also regarding data. Compute, algorithms, and data availability are all needed to get to this point. It seems really hard to know beforehand what kind, and how much, of algorithms and data one would need. I agree that basically only one source of data, text, could well be insufficient. There was a post I read on a forum somewhere (could have been here) from someone who had GPT-3 solve questions including instructions like 'let all odd rows of your answer be empty'. GPT-3 failed at all these kinds of assignments, showing a lack of comprehension. Still, the 'we haven't found the asymptote' argument from OpenAI (intelligence does increase with model size, and that increase doesn't seem to stop, implying that we'll hit AGI eventually) is not completely unconvincing either. It bothers me that no one can completely rule out that large language models might hit AGI just by scaling them up. It doesn't seem likely to me, but from a risk management perspective, that's not the point. An interesting perspective I'd never heard before from intelligent people is that AGI might actually need embodiment to gather the relevant data. (They also think it would need social skills first, which is also an interesting thought.)
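As an aside, the 'odd rows empty' constraint is easy to state precisely, which is part of what makes the failure telling. A toy checker (my own sketch, not from the original post; I'm assuming 'odd' means 1st, 3rd, 5th rows in 1-based counting) shows how mechanical the requirement is:

```python
def odd_rows_empty(answer: str) -> bool:
    """Check that every odd row (1st, 3rd, ... counting from 1)
    of the answer is blank."""
    rows = answer.split("\n")
    # Row i+1 is odd when index i is even; those rows must be empty.
    return all(row.strip() == "" for i, row in enumerate(rows) if (i + 1) % 2 == 1)

# A compliant answer alternates blank and non-blank lines:
print(odd_rows_empty("\nhello\n\nworld"))  # True
print(odd_rows_empty("hello\nworld"))      # False
```

Any program can verify compliance in a few lines, yet the model reportedly couldn't produce it, which suggests the instruction was pattern-matched rather than understood.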
While it's hard to know how much (and what kind of) algorithmic improvement and data is needed, it seems doable to estimate the amount of compute needed, namely what's in a brain, plus or minus a few orders of magnitude. It's hard for me to imagine that evolution can be beaten by more than a few orders of magnitude in algorithmic efficiency (the other way round is somewhat easier to imagine, but still unlikely on a hundred-year timeframe). I think people have focused on compute because it's the most forecastable factor, not because it would be the only part that's important.
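To make the 'brain compute plus or minus a few orders of magnitude' anchor concrete, here is a rough back-of-the-envelope sketch. The constants (roughly 1e14 synapses, average firing rates around 1e2 Hz, one FLOP per synaptic event) are commonly cited order-of-magnitude guesses, not settled numbers, and the function name is mine:

```python
# Back-of-the-envelope estimate of brain-equivalent compute.
# All constants are order-of-magnitude guesses; the point is only
# the "plus or minus a few orders of magnitude" style of reasoning.

def brain_flops(synapses=1e14, firing_rate_hz=1e2, flops_per_event=1):
    """Naive estimate: one floating-point operation per synaptic event."""
    return synapses * firing_rate_hz * flops_per_event

central = brain_flops()                    # ~1e16 FLOP/s
low, high = central / 1e3, central * 1e3   # +/- three orders of magnitude

print(f"central estimate: {central:.0e} FLOP/s")
print(f"uncertainty band: {low:.0e} .. {high:.0e} FLOP/s")
```

The wide band is the whole argument: even with three orders of magnitude of uncertainty in either direction, the estimate still constrains timelines in a way that estimates of data or algorithmic progress currently don't.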
Still, there is a large gap between what I think are essentially thought experiments (relevant ones, though!) leading to concepts such as AGI and the singularity, and actual present-day AI. I'm definitely interested in ideas filling that gap. I think 'AGI safety from first principles' by Richard Ngo is a good try. I guess you've read it too, since it's part of the AGI Safety Fundamentals curriculum? What did you think about it? Do you know any similar or even better papers on the topic?
It could be that belief too, yes! I think I'm a bit exceptional in the sense that I have no problem imagining human beings achieving really complex stuff, but also no problem imagining human beings failing miserably at what appear to be really easy coordination issues. My first thought when I heard about AGI, recursive self-improvement, and human extinction was 'ah yeah, that sounds like exactly the kind of thing engineers/scientists would do!' I guess some people believe engineers/scientists could never make AGI (I disagree), while others think they could, but would not be stupid enough to screw up badly enough to actually cause human extinction (I also disagree).
Hi Otto, I have been wanting to reply to you for a while, but I feel like my opinions keep changing, so writing coherent replies is hard (though having fluid opinions in my case seems like a good thing). For example, while I still think a precollected set of text alone is insufficient as a data source for any general intelligence, maybe training a model on text and then having it interact with humans could lead it to connect words to referents (real-world objects), and maybe it would not need many reference points if the language model is rich enough? Then again, this sounds a bit like the concept of imagination, and I am worried I am anthropomorphising in a weird way.
Anyway, I still hold the intuition that generality is not necessarily the most important thing to focus on when thinking about future AI scenarios. This is, of course, an argument for taking AI risk more seriously, since it should be more likely that someone builds either advanced narrow AI or advanced AGI than advanced AGI alone.
I liked "AGI safety from first principles", but I would still be reluctant to discuss it with, say, my colleagues at my day job, so I think I would need something even more grounded in current tech. I do understand why people don't keep writing that kind of paper, though, since it probably doesn't directly help solve alignment.
Hey I wasn’t saying it wasn’t that great :)