I hope I’m wrong, but I suspect people are downvoting this post because its author isn’t highly familiar with insider EA jargon and arguments around AI, and because it pattern-matches to 101-level objections like “machines can’t have souls”.
I do disagree with a lot of the arguments made in the post. For example, I think machine learning is fundamentally different from regular programming, in that it’s a hill-climbing, trial-and-error machine, not just a set of explicit commands.
However I think a large part of the post is actually correct. Major conceptual breakthroughs will be required to turn the current AI tech into anything resembling AGI, and it’s very hard to know when or if those breakthroughs will occur. That last sentence was basically a paraphrase of Stuart Russell, btw, so it’s not just AI-risk skeptics that are saying it. It is entirely possible that we get stuck again, and it’ll take a greater understanding of the human brain to get out of it.
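To make the hill-climbing distinction concrete, here is a minimal sketch (the function names, model, and numbers are all my own, purely for illustration) contrasting a rule an author writes down with one recovered by trial and error against example data:

```python
# Regular programming: the behaviour is an explicit command the author wrote.
def fahrenheit(celsius):
    return celsius * 9 / 5 + 32

# Machine learning, caricatured as hill climbing: start from a guess, try
# small tweaks to the parameters, and keep any tweak that reduces the error
# on example data. No human ever writes the formula down.
def fit_by_trial_and_error(examples):
    a, b = 0.0, 0.0          # model: prediction = a * celsius + b
    step = 8.0

    def error(a, b):
        return sum((a * c + b - f) ** 2 for c, f in examples)

    best = error(a, b)
    while step > 1e-6:
        improved = False
        for da, db in ((step, 0), (-step, 0), (0, step), (0, -step)):
            if error(a + da, b + db) < best:
                a, b = a + da, b + db
                best = error(a, b)
                improved = True
        if not improved:
            step /= 2        # no tweak helped at this size, so look closer
    return a, b

examples = [(c, fahrenheit(c)) for c in range(-40, 101, 10)]
a, b = fit_by_trial_and_error(examples)
print(a, b)  # close to 1.8 and 32, recovered without seeing the formula
```

The point of the caricature: the second function’s behaviour emerges from the error signal and the examples rather than from commands that encode the rule, which is roughly the sense in which ML differs from regular programming.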
Thanks, that was sort of my sense. I appreciate that you are both sharing what you agree and disagree with too. It’s refreshing!
I’d say a crux is that I believe the reason AI is at all useful on real tasks is something like 50% compute, 45% data, and 5% conceptual insight. In other words, compute and data were the big bottlenecks to usefulness, and I give an 80% chance that in 10 years’ time they will still be the bottlenecks.
I’d say the only real conceptual idea was the neural network itself, and everything since is just scaling at work.
Yet I think there are simpler reasons why this post is downvoted. I see three:
1. It’s way longer than necessary.
2. Even compared to the unfortunately speculative evidence base of the average AGI post, this is one of the worst. It basically just asserts that AGI won’t come, making statements with no evidence behind them.
3. Some portions of his argument don’t really relate to his main thesis. This is especially true of the Bayesian section, which is a derail from his main point.
Overall, I’d strongly downvote the post for these reasons.
Please see my responses below each point.
1. It’s way longer than necessary.
I understand; I struggled to find a way to make it shorter. I actually thought it needed to be longer, to make each section more explicit. I thought that if I explained things in detail, more of this worldview would make sense to more people. If you could give me an example of how to condense a section while keeping the message intact, please let me know. It’s a challenge I’m working on.
2. Even compared to the unfortunately speculative evidence base of the average AGI post, this is one of the worst. It basically just asserts that AGI won’t come, making statements with no evidence behind them.
On the contrary, I believe AGI will come; I wrote this in my essay. AGI is possible. But I don’t think it will come spontaneously. We will need the required knowledge to program it.
3. Some portions of his argument don’t really relate to his main thesis. This is especially true of the Bayesian section, which is a derail from his main point.
I can see how I leaned very heavily on the Bayesian section (my wife had the same critique), but I felt it important to stress the differing approaches to scientific understanding taken by Bayesianism and Fallibilism. I’m under the impression that many people don’t know the differences.
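For readers unfamiliar with the distinction being drawn: Bayesianism treats a belief as a probability that gets updated by evidence through Bayes’ rule, whereas Fallibilism treats theories as conjectures that stand until refuted rather than as graded credences. The Bayesian mechanics fit in a few lines (all numbers below are invented purely for illustration):

```python
# Toy Bayesian update via Bayes' rule. The numbers are made up.
prior = 0.5                # P(H): prior credence in some hypothesis H
p_e_given_h = 0.8          # P(E | H): probability of the evidence if H holds
p_e_given_not_h = 0.4      # P(E | not H)

# P(H | E) = P(E | H) P(H) / [ P(E | H) P(H) + P(E | not H) P(not H) ]
posterior = (p_e_given_h * prior) / (
    p_e_given_h * prior + p_e_given_not_h * (1 - prior)
)
print(posterior)  # 0.666...: the evidence shifts the credence up from 0.5
```

A Fallibilist would instead ask whether the evidence refutes H outright, not how much to slide a probability, which is the methodological difference the section was trying to stress.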
I don’t think the readers are pattern matching to “machines can’t have souls”, but some of the readers probably pattern match to the “humans need to figure out free will and consciousness before they can build AGI” claim. Imo they would not be completely wrong to perform this pattern matching if they give the post a brief skim.