Even though you disagreed with my post, I was touched to see that it was one of the “top” posts that you disagreed with :). However, I’m really struggling to see the connection between my argument and Deutsch’s views on AI and universal explainers. There’s nothing in the piece that you link to about complexity classes or efficiency limits on algorithms.
You are totally right: Deutsch's argument is about computability, not complexity. Pardon!
Serves me right for trying to recap 1 of 170 posts from memory.
The basic answer is that computational complexity matters less than you think, primarily because the argument rests on very strong assumptions, and even one of those assumptions failing weakens its power.
The assumptions are:
Worst-case scenarios. In this setting everything matters, so anything that scales badly will dominate the overall problem (see the code sketch below for this assumption failing on typical instances).
Exactly optimal, deterministic solutions are required.
You have only one shot to solve the problem.
Small advantages do not compound into big advantages.
Linear returns are the best you can do.
This is a conjunctive argument: if any one of the premises is wrong, the entire argument gets weaker.
And given the conjunction fallacy (each added premise can make a story feel more plausible while actually making it less probable), we should be wary of accepting such a story.
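To make the first two assumptions concrete, here is a minimal Python sketch (my own illustration, not from the thread or the linked essay; the instance size and value range are arbitrary): number partitioning is NP-hard in the worst case, yet a simple greedy heuristic typically leaves an imbalance that is a tiny fraction of the total on random instances.

```python
import random

def greedy_partition(nums):
    """Largest-first greedy: place each number on the lighter pile.
    Returns the absolute difference between the two pile sums."""
    piles = [0, 0]
    for x in sorted(nums, reverse=True):
        if piles[0] <= piles[1]:
            piles[0] += x
        else:
            piles[1] += x
    return abs(piles[0] - piles[1])

# A random instance; parameters chosen only for illustration.
random.seed(0)
nums = [random.randint(1, 10**6) for _ in range(1000)]
print(f"total = {sum(nums)}, imbalance = {greedy_partition(nums)}")
```

An exact solver can be forced to pay exponential time for a perfect split, but on instances like this the greedy answer usually lands within a negligible fraction of the total, which is exactly the situation where worst-case hardness and exact optimality stop constraining what you can do in practice.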
Link to more resources here:
https://www.gwern.net/Complexity-vs-AI#complexity-caveats