I’m confused what you’re trying to say… Supposing we do in fact invent AGI someday, do you think this AGI won’t be able to do science? Or that it will be able to do science, but that wouldn’t count as “automating science”?
Or maybe when you said “whether ‘PASTA’ is possible at all”, you meant “whether ‘PASTA’ is possible at all via future LLMs”?
Maybe you’re assuming that everyone here has a shared assumption that we’re just talking about LLMs, and that if someone says “AI will never do X” they obviously mean “LLMs will never do X”? If so, I think that’s wrong (or at least I hope it’s wrong), and I think we should be more careful with our terminology. AI is broader than LLMs. …Well maybe Aschenbrenner is thinking that way, but I bet that if you were to ask a typical senior person in AI x-risk (e.g. Karnofsky) whether it’s possible that there will be some big AI paradigm shift (away from LLMs) between now and TAI, they would say “Well yeah duh of course that’s possible,” and then they would say that they would still absolutely want to talk about and prepare for TAI, in whatever algorithmic form it might take.
Apologies for not being clear! I’ll try to be a bit clearer here, but there’s probably a lot of inferential distance and we’re covering some quite deep topics:
Supposing we do in fact invent AGI someday, do you think this AGI won’t be able to do science? Or that it will be able to do science, but that wouldn’t count as “automating science”?
Or maybe when you said “whether ‘PASTA’ is possible at all”, you meant “whether ‘PASTA’ is possible at all via future LLMs”?
So on the first section, I’m going for the latter and taking issue with the term ‘automation’, which I think suggests a mindless, automatic process of achieving some output. But if digital functionalism were true, and we successfully made a digital emulation of a human who contributed to scientific research, I wouldn’t call that ‘automating science’; instead, we would have created a being that can do science. That being would be creative and agentic, with the ability to formulate its own novel ideas and hypotheses about the world. It would still be limited by its ability to sample from the world, design experiments, practice good epistemology, wait for physical results, and so on. It might be the case that some scientific research happens quickly, and then subsequent breakthroughs happen more slowly.
My opinions on this are also highly influenced by the works of Deutsch and Popper, who essentially argue that the growth of knowledge cannot be predicted. Since science is (in some sense) the stock of human knowledge, and since what cannot be predicted cannot be automated, scientific ‘automation’ is in some sense impossible.
Maybe you’re assuming that everyone here has a shared assumption that we’re just talking about LLMs...but I bet that if you were to ask a typical senior person in AI x-risk (e.g. Karnofsky) whether it’s possible that there will be some big AI paradigm shift (away from LLMs) between now and TAI, they would say “Well yeah duh of course that’s possible,” and then they would say that they would still absolutely want to talk about and prepare for TAI, in whatever algorithmic form it might take.
Agreed, AI is broader than LLMs, and maybe I was being a bit loose with my language. On the whole, though, I think much of the case made by proponents for the importance of working on AI Safety either assumes that the current paradigm plus scale is all you need, or rests on works that assume it. For instance, Davidson’s Compute-Centric Framework model for OpenPhil states right on its opening page:
In this framework, AGI is developed by improving and scaling up approaches within the current ML paradigm, not by discovering new algorithmic paradigms.
And I get off the bus with this approach immediately, because I don’t think that’s plausible.
As I said in my original comment, I’m working on a full post on the discussion between Chollet and Dwarkesh, which will hopefully make the AGI-sceptical position I’m coming from a bit more clear. If you end up reading it, I’d be really interested in your thoughts! :)
On the whole, though, I think much of the case made by proponents for the importance of working on AI Safety either assumes that the current paradigm plus scale is all you need, or rests on works that assume it.
Yeah this is more true than I would like. I try to push back on it where possible, e.g. my post AI doom from an LLM-plateau-ist perspective.

There were, however, plenty of people who were loudly arguing that it was important to work on AI x-risk before “the current paradigm” was much of a thing (or in some cases long before “the current paradigm” existed at all), and I think their arguments were sound at the time and remain sound today. (E.g. Alan Turing, Norbert Wiener, Yudkowsky, Bostrom, Stuart Russell, Tegmark…) (OpenPhil seems to have started working seriously on AI in 2016, which was 3 years before GPT-2.)