I have a vague impression—I forget from where and it may well be false—that Nora has read some of my AI alignment research, and that she thinks of it as not entirely pointless. If so, then when I say "pre-2020 MIRI (esp. Abram & Eliezer) deserve some share of the credit for my thinking", that's meaningful, because there is in fact some nonzero credit to be given. Conversely, if you (or anyone) don't know anything about my AI alignment research, or think it's dumb, then you should ignore that part of my comment—it's not offering any evidence; it would just be saying that useless research can sometimes lead to further useless research, which is obvious! :)
I probably think less of current “empirical” research than you, because I don’t think AGI will look and act and be built just like today’s LLMs but better / larger. I expect highly-alignment-relevant differences between here and there, including (among other things) reinforcement learning being involved in a much more central way than it is today (i.e. RLHF fine-tuning). This is a big topic where I think reasonable people disagree and maybe this comment section isn’t a great place to hash it out. ¯\_(ツ)_/¯
My own research doesn't involve LLMs and could have been done in 2017, but I'm not sure I would call it "purely conceptual"—it involves a lot of stuff like scrutinizing data tables in experimental neuroscience papers. The ELK research project led by Paul Christiano also could have been done in 2017, as far as I can tell, but lots of people seem to think it's worthwhile; do you? (Paul is a coinventor of RLHF.)
I’ve certainly heard of your work but it’s far enough out of my research interests that I’ve never taken a particularly strong interest. Writing this in this context makes me realise I might have made a bit of a one-man echo chamber for myself… Do you mind if we leave this as ‘undecided’ for a while?
Regarding ELK—I think the core of the problem as I understand it is fairly clear once you begin thinking about interpretability. Understanding the relation between AI and human ontologies was part of the motivation behind my work on AlphaZero (as well as an interest in the natural abstractions hypothesis). Section 4 "Encoding of human conceptual knowledge" and Section 8 "Exploring activations with unsupervised methods" are the places to look. I think the section on challenges and limitations in concept probing echoes a lot of the concerns in ELK.
In terms of subsequent work on ELK, I don't think much of the work on solving ELK was particularly useful; it often reinvented existing methods (e.g. sparse probing, causal interchange interventions). If I were to try to work on it, I think the best way to do so would be to embed the core challenge in a tractable research program, for instance trying to extract new scientific knowledge from ML models like AlphaFold.
To move this in a more positive direction, the most fruitful/exciting conceptual work I’ve seen is probably (1) the natural abstractions hypothesis and (2) debate. When I think a bit about why I particularly like these, for (1) it’s because it seems plausibly true, extremely useful if true, and amenable to both formal theoretical work and empirical study. For (2) it’s because it’s a pretty striking new idea that seems very powerful/scalable, but also can be put into practice a bit ahead of really powerful systems.