Why is this post being downvoted while no one comments to explain why they disagree and/or think it’s a bad post? This is why I dislike the voting system: it spares people from having to actually engage with others they disagree with (which is how we will make progress)! It would also be helpful for someone like me (who is not in the AI space) to understand what people are disagreeing with in this post. I understand some people don’t feel that certain posts are worth engaging with (which is fine), but then at least don’t downvote?
I disagree; I think it’s perfectly fine for people to downvote posts without commenting. The key function of the karma system is to control how many people see a given piece of content, so I think up-/downvotes should reflect “should this content be seen by more people here?” If I think a post clearly isn’t worth reading (and in particular not worth engaging with), then IMO it makes complete sense to downvote so that fewer people spend time on it. In contrast, if I disagree with a post but think it’s well argued and worth engaging with, I would not downvote, and would engage in the comments instead.
When I see a net negative karma post, one of the first things I do is check the comments to see why people are downvoting it. Comments are much better than votes as a signal of the usefulness of a post. Note also that:
1. I might disagree with the comment, giving me evidence to ignore the downvotes and read the post.
2. I’m especially interested in reading worthwhile posts with downvotes, because they might contain counterarguments to trendy ideas that people endorse without sufficient scrutiny.
3. Without comments, downvotes are anonymous. For all I know, the downvoters might have acted after reading a few sentences. Or they might be angry at the poster for personal reasons unrelated to the post. Or they might hold a lot of beliefs that I think are incorrect.
4. I’m not sure how the EA Forum algorithm works, but it might be the case that fewer people see a downvoted post, leading to a feedback loop that can bury a good idea before anyone credible reads it (see the toy model after this list).
5. In the best case, a comment summarizes the main ideas of the post. Even if the main ideas are clearly wrong, I’d rather hear about them so I can go “ah right, another argument of that form, those tend to be flawed” or “wait a minute, why is that flawed again? Let me think about it.”
6. At the very least, a comment tells me why the post got downvotes. Without any comments, I have to either (a) blindly trust the downvoters or (b) read some of the (possibly low-quality) post.
7. Comments can save time for everyone else. See (5) and (6).
8. Comments are easy! I don’t think anyone should downvote without having some reason for it, and if you have a reason, you can probably spell it out in a short comment. This should take a minute or less.
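To make point 4 concrete, here’s a deliberately crude toy simulation of that feedback loop. Everything in it (the visibility rule, the constants) is invented for illustration; it is not the actual EA Forum ranking algorithm, which I don’t know.

```python
import random

def visibility(score):
    # Invented toy rule (NOT the real EA Forum algorithm): lower-karma
    # posts are shown to a smaller fraction of readers.
    return max(0.0, min(1.0, 0.5 + 0.05 * score))

def simulate(initial_score, readers=200, p_downvote=0.5, seed=0):
    # Each reader sees the post with probability visibility(score); those
    # who see it vote, which in turn changes the post's future visibility.
    random.seed(seed)
    score, impressions = initial_score, 0
    for _ in range(readers):
        if random.random() < visibility(score):
            impressions += 1
            score += -1 if random.random() < p_downvote else 1
    return score, impressions

# Identical posts and readers; only the first few votes differ.
print(simulate(initial_score=3))   # starts visible, gets many impressions
print(simulate(initial_score=-3))  # early downvotes -> fewer impressions
```

The point is just that when visibility depends on score, and score can only change through impressions, a handful of early downvotes can compound.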
All that being said, I can’t remember any downvoted posts that I enjoyed reading. However, I rarely read downvoted posts because (a) I don’t see many of them and (b) they often have comments.
Oh, I agree that a comment plus a downvote is more useful for others than a downvote alone; my main claim was that a downvote alone is more useful than nothing. So I don’t want there to be a norm that you need to comment when downvoting, if that leads to fewer people voting (which I think it would). See Well-Kept Gardens Die By Pacifism for some background on why I think that would be really bad.
Tbc, I don’t want to discourage commenting to explain votes; I just think the decision of whether that is worth your time should be up to you.
I hope I’m wrong, but I suspect people are downvoting this post because its author isn’t highly familiar with insider EA jargon and the standard arguments around AI, and because it pattern-matches to 101-level objections like “machines can’t have souls”.
I do disagree with a lot of the arguments made in the post. For example, I think machine learning is fundamentally different from regular programming, in that it’s a hill-climbing, trial-and-error machine, not just a set of commands.
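Here’s a minimal sketch of the distinction I mean, with an invented toy example (and a caricature: real ML systems use gradient descent over millions of parameters, not random tweaks over two). The “regular programming” version states the rule explicitly; the hill climber stumbles onto it by keeping whichever random tweaks reduce its error on example data.

```python
import random

# "Regular programming": the behaviour is an explicit set of commands.
def fahrenheit_explicit(celsius):
    return celsius * 9 / 5 + 32

# "Hill-climbing trial and error": guess two parameters, then keep any
# small random tweak that reduces the error on example data.
data = [(0.0, 32.0), (100.0, 212.0), (37.0, 98.6)]

def error(a, b):
    return sum((a * c + b - f) ** 2 for c, f in data)

a, b = random.random(), random.random()
for _ in range(200_000):
    a2 = a + random.uniform(-0.01, 0.01)
    b2 = b + random.uniform(-0.01, 0.01)
    if error(a2, b2) < error(a, b):  # keep the tweak only if it helps
        a, b = a2, b2

print(fahrenheit_explicit(100))   # 212.0, by explicit rule
print(round(a, 2), round(b, 2))   # drifts toward 1.8 and 32, never told the rule
```

Nobody wrote the conversion rule into the second program; it emerged from trial and error against data.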
However, I think a large part of the post is actually correct: major conceptual breakthroughs will be required to turn current AI tech into anything resembling AGI, and it’s very hard to know when or if those breakthroughs will occur. (That last sentence is basically a paraphrase of Stuart Russell, by the way, so it’s not just AI-risk skeptics saying it.) It is entirely possible that we get stuck again, and that it will take a greater understanding of the human brain to get out of it.
Thanks, that was sort of my sense. I also appreciate that you share both what you agree with and what you disagree with. It’s refreshing!
I’d say a crux is that I believe the reason AI is at all useful in real tasks is something like 50% compute, 45% data, and 5% conceptual insight. In other words, compute and data were the big bottlenecks to usefulness, and I give an 80% chance that in 10 years’ time they will still be bottlenecks.
I’d say the only conceptual idea was the neural network itself; everything since is just scaling at work.
Yet I think the explanation for why this post is being downvoted is simpler. I see three reasons:
1. It’s way longer than necessary.
2. Even compared to the unfortunately speculative evidence base of the average AGI post, this is one of the worst: it essentially just asserts that AGI won’t come, offering no evidence.
3. Some portions of his argument don’t really relate to his main thesis. This is especially true of the Bayesian section, which is a derail from his main point.
Overall, I’d strongly downvote the post for these reasons.
Please see my responses below each point.
1. It’s way longer than necessary.
I understand; I struggled to find a way to make it shorter. I actually thought it needed to be longer, to make each section more explicit. I thought that if I could explain things in detail, a lot more of this worldview would make sense to more people. If you could give me an example of how to condense a section while keeping the message intact, please let me know. It’s a challenge I’m working on.
2. Even compared to the unfortunately speculative evidence base of the average AGI post, this is one of the worst: it essentially just asserts that AGI won’t come, offering no evidence.
On the contrary, I believe AGI will come; I wrote this in my essay. AGI is possible. But I don’t think it will come spontaneously. We will need the required knowledge to program it.
3. Some portions of his argument don’t really relate to his main thesis. This is especially true of the Bayesian section, which is a derail from his main point.
I can see how I leaned very heavily on the Bayesian section (my wife had the same critique), but I felt it was important to stress the differing approaches to scientific understanding between Bayesianism and Fallibilism. I’m under the impression that many people don’t know the differences.
I don’t think the readers are pattern matching to “machines can’t have souls”, but some of the readers probably pattern match to the “humans need to figure out free will and consciousness before they can build AGI” claim. Imo they would not be completely wrong to perform this pattern matching if they give the post a brief skim.