I won’t go through this whole post, but I’ll pick out a few representative bits to reply to.
Deutsch’s idea of explanatory universality helps clarify the mistake. Persons are universal explainers. They create new explanations that were not contained in past data. This creativity is not extrapolation from a dataset. It is invention.
LLMs do not do this. They remix what exists in their training corpus. They do not originate explanatory theories.
This statement expresses a high degree of confidence in a claim that has, as far as I can tell, zero supporting evidence. I would strongly bet against the prediction that LLMs will never be able to originate an explanatory theory.
Until we understand how humans create explanatory knowledge, we cannot program that capacity.
We still don’t know how humans create language, or prove mathematical conjectures, or manipulate objects in physical space, and yet we created AIs that can do those things.
The AI 2027 paper leans heavily on forecasting. But when the subject is knowledge creation, forecasting is not just difficult. It is impossible in principle. This was one of Karl Popper’s central insights.
I am not aware of any such insight? This claim seems easily falsified by the existence of superforecasters.
And: if prediction is impossible in principle, then you can’t confidently say that ASI won’t kill everyone, so you should regard it as potentially dangerous. But you seem to be quite confident that you know what ASI will be like.
The rationalist story claims a superintelligent AI will likely be a moral monster. This conflicts with the claim that such a system will understand the world better than humans do.
https://www.lesswrong.com/w/orthogonality-thesis
This statement expresses a high degree of confidence in a claim that has, as far as I can tell, zero supporting evidence. I would strongly bet against the prediction that LLMs will never be able to originate an explanatory theory.
When? In 1 year, 10 years, 100 years, or 1000 years? And involving what new technological paradigms or new basic science?
I think the quoted claim is true as stated — no LLM has created any explanatory theory so far.
I am not aware of any such insight? This claim seems easily falsified by the existence of superforecasters.
I think you’re misunderstanding the argument. No superforecaster has ever predicted the content of a new scientific discovery.